
It was March 2nd, 2011, and I was fifteen years old. I was in the clouds. My font family, Expletus Sans, had just gone live on the Google Webfonts Directory (now simply called Google Fonts). Plenty of positive feedback and a generous reward from Google had made me expect a lot of it. But it didn’t take very long before I started laughing at the high regard I once had for Expletus Sans, and its silly name. The elegance I once saw in it was soon mixed with a decent dose of clumsiness and amateurism. However, Expletus Sans did provide me with the motivation and opportunity to invest in my skills, and keep designing typefaces.

Original author: Lars Kappert

We are talking and reading a lot about responsive Web design (RWD) these days, but very little attention is given to Web applications. Admittedly, RWD still has to be ironed out. But many of us believe it to be a strong concept, and it is here to stay. So, why don’t we extend this topic to HTML5-powered applications? Because responsive Web applications (RWAs) are both a huge opportunity and a big challenge, I wanted to dive in.

Building an RWA is more feasible than you might think. In this article, we will explore ideas and solutions. In the first part, we will set up some important concepts. We will build on these in the second part to actually develop an RWA, and then explore how scalable and portable this approach is.

Part 1: Becoming Responsible

Some Lessons Learned

It’s not easy to admit, but recently it has become more and more apparent that we don’t know many things about users of our websites. Varying screen sizes, device features and input mechanisms are pretty much RWD’s reasons for existence.

From the lessons we’ve learned so far, we mustn’t assume too much. For instance, a small screen is not necessarily a touch device. A mobile device could be over 1280 pixels wide. And a desktop could have a slow connection. We just don’t know. And that’s fine. This means we can focus on these things separately without making assumptions: that’s what responsiveness is all about.

Progressive Enhancement

The “JavaScript-enabled” debate is so ’90s. We need to optimize for accessibility and indexability (i.e. SEO) anyway. Claiming that JavaScript is required for Web apps and, thus, that there is no real need to pre-render HTML is fair (because SEO is usually less important, or not important at all, for apps). But because we are going responsive, we will inherently pay a lot of attention to mobile and, thus, to performance as well. This is why we are betting heavily on progressive enhancement.

Responsive Web Design

RWD has mostly to do with not knowing the screen’s width. We have multiple tools to work with, such as media queries, relative units and responsive images. No matter how wonderful RWD is conceptually, some technical issues still need to be solved.

Not many big websites have gone truly responsive since The Boston Globe. (Image credits: Antoine Lefeuvre)

Client-Side Solutions

In the end, RWD is mostly about client-side solutions. Assuming that the server basically sends the same initial document and resources (images, CSS and JavaScript) to every device, any responsive measures will be taken on the client, such as:

  • applying specific styles through media queries;
  • using (i.e. polyfilling) <picture> or @srcset to get responsive images;
  • loading additional content.

Some of the issues surrounding RWD today are the following:

  • Responsive images haven’t been standardized.
  • Devices still load the CSS behind media queries that they never use.
  • We lack (browser-supported) responsive layout systems (think flexbox, grid, regions, template).
  • We lack element queries.

Server-Side Solutions: Responsive Content

Imagine that these challenges (such as images not being responsive and CSS loading unnecessarily) were solved on all devices and in all browsers, and that we didn’t have to resort to hacks or polyfills in the client. This would transfer some of the load from the client to the server (for instance, the CMS would have more control over responsive images).

But we would still face the issue of responsive content. Although many believe that the constraints of mobile help us to focus, to write better content and to build better designs, sometimes it’s simply not enough. This is where server-side solutions such as RESS and HTTP Client Hints come in. Basically, by knowing the device’s constraints and features up front, we can serve a different and optimized template to it.

Assuming we want to COPE, DRY and KISS and stuff, I think it comes down to where you want to draw the line here: the more important that performance and content tailored to each device is, the more necessary server-side assistance becomes. But we also have to bet on user-agent detection and on content negotiation. I’d say that this is a big threshold, but your mileage may vary. In any case, I can see content-focused websites getting there sooner than Web apps.

Having said that, I am focusing on RWAs in this article without resorting to server-side solutions.

Responsive Behavior

RWD is clearly about layout and design, but we will also have to focus on responsive behavior. It is what makes applications different from websites. Fluid grids and responsive images are great, but once we start talking about Web applications, we also have to be responsive in loading modules according to screen size or device capability (i.e. pretty much media queries for JavaScript).

For instance, an application might require GPS to be usable. Or it might contain a large interactive table that just doesn’t cut it on a small screen. And we simply can’t set display: none on all of these things, nor can we build everything twice.

We clearly need more.

Part 2: Building RWAs

To quickly recap, our fundamental concepts are:

  • progressive enhancement,
  • responsive design,
  • responsive behavior.

Fully armed, we will now look into a way to build responsive, context-aware applications. We’ll do this by declaratively specifying modules, conditions for loading modules, and extended modules or variants, based on feature detection and media queries. Then, we’ll dig deeper into the mechanics of dependency injection to see how all of this can be implemented.

Declarative Module Injection

We’ll start off by applying the concepts of progressive enhancement and mobile first, and create a common set of HTML, CSS and JavaScript for all devices. Later, we’ll progressively enhance the application based on content, screen size, device features, etc. The foundation is always plain HTML. Consider this fragment:


<div data-module="myModule">
    <p>Pre-rendered content</p>
</div>

Let’s assume we have some logic to query the data-module attribute in our document, to load up the referenced application module (myModule) and then to attach it to that element. Basically, we would be adding behavior that targets a particular fragment in the document.

This is our first step in making a Web application responsive: progressive module injection. Also, note that we could easily attach multiple modules to a single page in this way.
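As a rough sketch of that logic (not a complete implementation), assuming an AMD-style require() and modules that export a constructor which accepts the element to attach to, the scanner could look like this:

// Find every element that declares a module and inject it.
// Assumes an AMD-style require() and modules exporting a constructor.
function injectModule(node, moduleName) {
    require([moduleName], function(Module) {
        new Module({ el: node });   // hand the element to the module
    });
}

var nodes = document.querySelectorAll('[data-module]');
Array.prototype.forEach.call(nodes, function(node) {
    injectModule(node, node.getAttribute('data-module'));
});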

Conditional Module Injection

Sometimes we want to load a module only if a certain condition is met — for instance, when the device has a particular feature, such as touch or GPS:


<div data-module="find/my/dog" data-condition="gps">
    <p>Pre-rendered fallback content if GPS is unavailable.</p>
</div>

This will load the find/my/dog module only if the geolocation API is available.

Note: For the smallest footprint possible, we’ll simply use our own feature detection for now. (Really, we’re just checking for 'geolocation' in navigator.) Later, we might need more robust detection and so delegate this task to a tool such as Modernizr or Has.js (and possibly PhoneGap in hybrid mode).
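A minimal sketch of that check, reusing the injectModule() helper from the earlier sketch; the feature map here is an assumption, not a full detection library:

// Map condition names to simple feature checks (deliberately minimal).
var features = {
    gps: 'geolocation' in navigator,
    touch: 'ontouchstart' in window
};

var conditionalNode = document.querySelector('[data-module][data-condition]'),
    condition = conditionalNode.getAttribute('data-condition');

if (features[condition]) {
    injectModule(conditionalNode, conditionalNode.getAttribute('data-module'));
}
// If the condition fails, the pre-rendered fallback content simply stays in place.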

Extended Module Injection

What if we want to load variants of a module based on media queries? Take this syntax:


<div data-module="myModule" data-variant="large">
    <p>Pre-rendered content</p>
</div>

This will load myModule on small screens and myModule/large on large screens.

For brevity, this single attribute contains the condition and the location of the variant (by convention). Programmatically, you could go mobile first and have the latter extend from the former (or keep them as separate modules, or even do it the other way around). This can be decided case by case.

Media Queries

Of course, we couldn’t call this responsive if it wasn’t actually driven by media queries. Consider this CSS:


@media all and (min-width: 45em) {
	body:after {
		content: 'large';
		display: none;
	}
}

Then, from JavaScript this value can be read:


var size = window.getComputedStyle(document.body,':after').getPropertyValue('content');

And this is why we can decide to load the myModule/large module from the last example if size === "large", and load myModule otherwise. Being able to conditionally not load a module at all is useful, too:


<div data-module="myModule" data-condition="!small">
    <p>Pre-rendered content</p>
</div>
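Putting these pieces together, here is one hypothetical way to combine the breakpoint read from CSS with the data-variant and data-condition attributes, again reusing injectModule() from the earlier sketch:

// Read the active breakpoint name written into body:after
// (assumes each breakpoint writes its own name, e.g. 'small' or 'large').
var size = window.getComputedStyle(document.body, ':after')
    .getPropertyValue('content').replace(/["']/g, '');

var el = document.querySelector('[data-module]'),
    moduleName = el.getAttribute('data-module'),
    variant = el.getAttribute('data-variant'),
    condition = el.getAttribute('data-condition');

// data-condition="!small": skip the module entirely when that breakpoint is active.
var skip = condition && condition.charAt(0) === '!' && size === condition.slice(1);

if (!skip) {
    // data-variant="large": load myModule/large when the breakpoint matches.
    var path = (variant && size === variant) ? moduleName + '/' + variant : moduleName;
    injectModule(el, path);
}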

There might be cases for media queries inside module declarations:


<div data-module="myModule" data-matchMedia="min-width: 800px">
    <p>Pre-rendered content</p>
</div>

Here we can use the window.matchMedia() API (a polyfill is available). I normally wouldn’t recommend doing this because it’s not very maintainable. Following breakpoints as set in CSS seems logical (because page layout probably dictates which modules to show or hide anyway). But obviously it depends on the situation. Targeted element queries may also prove useful:


<div data-module="myModule" data-matchMediaElement="(min-width: 600px)"></div>

Please note that the names of the attributes used here represent only an example, a basic implementation. They’re supposed to clarify the idea. In a real-world scenario, it might be wise to, for example, namespace the attributes, to allow for multiple modules and/or conditions, and so on.

Device Orientation

Take special care with device orientation. We don’t want to load a different module when the device is rotated. So, the module itself should be responsive, and the page’s layout might need to accommodate this.

Connecting The Dots

The concept of responsive behavior allows for a great deal of flexibility in how applications are designed and built. We will now look into where those “modules” come in, how they relate to application structure, and how this module injection might actually work.

Applications and Modules

We can think of a client-side application as a group of application modules that are built with low-level modules. As an example, we might have User and Message models and a MessageDetail view to compose an Inbox application module, which is part of an entire email client application. The details of implementation, such as the module format to be used (for example, AMD, CommonJS or the “revealing module” pattern), are not important here. Also, defining things this way doesn’t mean we can’t have a bunch of mini-apps on a single page. On the other hand, I have found this approach to scale well to applications of any size.

A Common Scenario

An approach I see a lot is to put something like <div id="container"> in the HTML, and then load a bunch of JavaScript that uses that element as a hook to append layouts or views. For a single application on a single page, this works fine, but in my experience it doesn’t scale well:

  • Application modules are not very reusable because they rely on a particular element to be present.
  • When multiple applications or application modules are to be instantiated on a single page, they all need their own particular element, further increasing complexity.

To solve these issues, instead of letting application modules control themselves, what about making them more reusable by providing the element they should attach to? Additionally, we don’t need to know which modules must be loaded up front; we will do that dynamically. Let’s see how things come together using powerful patterns such as Dependency Injection (DI) and Inversion of Control (IOC).

Dependency Injection

You might have wondered how myModule actually gets loaded and instantiated.

Loading the dependency is pretty easy. For instance, take the string from the data-module attribute (myModule), and have a module loader fetch the myModule.js script.

Let’s assume we are using AMD or CommonJS (either of which I highly recommend) and that the module exports something (say, its public API). Let’s also assume that this is some kind of constructor that can be instantiated. We don’t know how to instantiate it because we don’t know exactly what it is up front. Should we instantiate it using new? What arguments should be passed? Is it a native JavaScript constructor function or a Backbone view or something completely different? Can we make sure the module attaches itself to the DOM element that we provide it with?

We have a couple of possible approaches here. A simple one is to always expect the same exported value — such as a Backbone view. It’s simple but might be enough. It would come down to this (using AMD and a Backbone view):


var moduleNode = document.querySelector('[data-module]'),
    moduleName = moduleNode.getAttribute('data-module');

require([moduleName], function(MyBackBoneView) {
    new MyBackBoneView({
        el: moduleNode
    });
});

That’s the gist of it. It works fine, but there are even better ways to apply this pattern of dependency injection.

IOC Containers

Let’s take a library such as the excellent wire.js library by cujoJS. An important concept in wire.js is “wire specs,” which essentially are IOC containers. A wire spec performs the actual instantiation of the application modules based on a declarative specification. Going this route, the data-module should reference a wire spec (instead of a module) that describes which module to load and how to instantiate it, allowing for practically any type of module. Now, all we need to do is pass the reference to the spec and the viewNode to wire.js. We can simply define this:


wire([specName, { viewNode: moduleNode }]);

Much better. We let wire.js do all of the hard work. Besides, wire has a ton of other features.

In summary, we can say that our declarative composition in HTML (<div data-module="">) is parsed by the composer, which consults the advisor about whether the module should be loaded (data-condition) and which module to load (data-module or data-variant), so that the dependency injector (DI, wire.js) can load and apply the correct spec and application module.

Declarative Composition

Detections for screen size and device features that are used to build responsive applications are sometimes implemented deep inside application logic. This responsibility should be placed elsewhere, decoupled from the particular applications. We are already doing our (responsive) layout composition with HTML and CSS, so responsive applications fit in naturally. You could think of the HTML as an IOC container to compose applications.

You might not like to put (even) more information in the HTML. And honestly, I don’t like it at all. But it’s the price to pay for optimized performance when scaling up. Otherwise, we would have to make another request to find out whether and which module to load, which defeats the purpose.

Wrapping Up

I think the combination of declarative application composition, responsive module loading and module extension opens up a boatload of options. It gives you a lot of freedom to implement application modules the way you want, while supporting a high level of performance, maintainability and software design.

Performance and Build

Sometimes RWD actually decreases the performance of a website when implemented superficially (such as by simply adding some media queries or extra JavaScript). But for RWA, performance is actually what drives the responsive injection of modules or variants of modules. In the spirit of mobile first, load only what is required (and enhance from there).

Looking at the build process to minify and optimize applications, we can see that the challenge lies in finding the right approach to optimize either for a single application or for reusable application modules across multiple pages or contexts. In the former case, concatenating all resources into a single JavaScript file is probably best. In the latter case, concatenating resources into a separate shared core file and then packaging application modules into separate files is a sound approach.

A Scalable Approach

Responsive behavior and complete RWAs are powerful in a lot of scenarios, and they can be implemented using various patterns. We have only scratched the surface. But technically and conceptually, the approach is highly scalable. Let’s look at some example scenarios and patterns:

  • Sprinkle bits of behavior onto static content websites.
  • Serve widgets in a portal-like environment (think a dashboard, iGoogle or Netvibes). Load a single widget on a small screen, and enable more as screen resolution allows.
  • Compose context-aware applications in HTML using reusable and responsive application modules.

In general, the point is to maximize portability and reach by building on proven concepts to run applications on multiple platforms and environments.

Future-Proof and Portable

Some of the major advantages of building applications in HTML5 are that they’re future-proof and portable. Write HTML5 today and your efforts won’t be obsolete tomorrow. The list of platforms and environments where HTML5-powered applications run keeps growing rapidly:

  • As regular Web applications in browsers;
  • As hybrid applications on mobile platforms, powered by Apache Cordova (see note below):
    • iOS,
    • Android,
    • Windows Phone,
    • BlackBerry;
  • As Open Web Apps (OWA), currently only in Firefox OS;
  • As desktop applications (such as those packaged by the Sencha Desktop Packager):
    • Windows,
    • OS X,
    • Linux.

Note: Tools such as Adobe PhoneGap Build, IBM Worklight and Telerik’s Icenium all use Apache Cordova APIs to access native device functionality.

Demo

You might want to dive into some code or see things in action. That’s why I created a responsive Web apps repository on GitHub, which also serves as a working demo.

Conclusion

Honestly, not many big websites (let alone true Web applications) have gone truly responsive since The Boston Globe. However, looking at deciding factors such as cost, distribution, reach, portability and auto-updating, RWAs are both a huge opportunity and a big challenge. It’s only a matter of time before they become much more mainstream.

We are still looking for ways to get there, and we’ve covered just one approach to building RWAs here. In any case, declarative composition for responsive applications is quite powerful and could serve as a solid starting point.

(al) (ea)

© Lars Kappert for Smashing Magazine, 2013.

Original author: Wilson Page

When the mockups for the new Financial Times application hit our desks in mid-2012, we knew we had a real challenge on our hands. Many of us on the team (including me) swore that parts of the interface would not be possible in HTML5. Given the product team’s passion for the new UI, we rolled up our sleeves and gave it our best shot.

We were tasked with implementing a far more challenging product, without compromising the reliable, performant experience that made the first app so successful.


We didn’t just want to build a product that fulfilled its current requirements; we wanted to build a foundation that we could innovate on in the future. This meant building with a maintenance-first mentality, writing clean, well-commented code and, at the same time, ensuring that our code could accommodate the demands of an ever-changing feature set.

In this article, I’ll discuss some of the changes we made in the latest release and the decision-making behind them. I hope you will come away with some ideas and learn from our solutions as well as our mistakes.

Supported Devices

The first Financial Times Web app ran on iPad and iPhone in the browser, and it shipped in a native (PhoneGap-esque) application wrapper for Android and Windows 8 Metro devices. The latest Web app is currently being served to iPad devices only; but as support is built in and tested, it will be rolled out to all existing supported platforms. HTML5 gives developers the advantage of occupying almost any mobile platform. With 2013 promising the launch of several new Web application marketplaces (eg. Chrome Web Store and Mozilla Marketplace), we are excited by the possibilities that lie ahead for the mobile Web.

Fixed-Height Layouts

The first shock that came from the new mockups was that they were all fixed height. By “fixed height,” I mean that, unlike a conventional website, the height of the page is restricted to the height of the device’s viewport. If there is more content than there is screen space, overflow must be dealt with at a component level, as opposed to the page level. We wanted to use JavaScript only as a last resort, so the first tool that sprang to mind was flexbox. Flexbox gives developers the ability to declare flexible elements that can fill the available horizontal or vertical space, something that has been very tricky to do with CSS. Chris Coyier has a great introduction to flexbox.
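To illustrate the idea (the class names here are invented, not taken from the FT app), a fixed-height page with flexbox might look something like this, using the new syntax and omitting vendor prefixes for brevity:

/* Sketch: a fixed-height page where the content area flexes to fill
   whatever space the header and footer leave over. */
html, body { height: 100%; }

.page {
    display: flex;
    flex-direction: column;
    height: 100%;
}

.page-header,
.page-footer { flex: none; }      /* keep their natural height */

.page-content {
    flex: 1;                      /* absorb the remaining viewport height */
    overflow: hidden;             /* overflow is handled per component */
}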

Using Flexbox in Production

Flexbox has been around since 2009 and has great support on all the popular smartphones and tablets. We jumped at the chance to use flexbox when we found out how easily it could solve some of our complex layouts, and we started throwing it at every layout problem we faced. As the app began to grow, we found performance was getting worse and worse.

We spent a good few hours in the Chrome Developer Tools timeline and found the culprit: Shock, horror! — it was our new best friend, flexbox. The timeline showed that some layouts were taking close to 100 milliseconds; reworking our layouts without flexbox reduced this to 10 milliseconds! This may not seem like a lot, but when swiping between sections, 90 milliseconds of unresponsiveness is very noticeable.

Back to the Old School

We had no other choice but to tear out flexbox wherever we could. We used 100% height, floats, negative margins, border-box sizing and padding to achieve the same layouts with much greater performance (albeit with more complex CSS). Flexbox is still used in some parts of the app. We found that its impact on performance was less expensive when used for small UI components.
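For comparison, here is a sketch of the same fixed-height column without flexbox, along the lines described above (again with invented class names):

/* Sketch: a fixed-height column using 100% height, box-sizing and a
   negative-margin trick instead of flexbox. */
html, body { height: 100%; }

.page {
    height: 100%;
    box-sizing: border-box;
    padding-top: 50px;            /* reserve room for a 50px header */
}

.page-header {
    height: 50px;
    margin-top: -50px;            /* pull the header into the reserved space */
}

.page-content {
    height: 100%;                 /* 100% of the remaining content box */
    overflow: hidden;
}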

Page layout time with flexbox

Page layout time without flexbox

Truncation

The content of a fixed-height layout will rarely fit its container; eventually it has to overflow. Traditionally in print, designers have used ellipses (three dots) to solve this problem; however, on the Web, this isn’t the simplest technique to implement.

Ellipsis

You might be familiar with the text-overflow: ellipsis declaration in CSS. It works great and has awesome browser support, but it has one shortcoming: it can’t be used for text that spans multiple lines. We needed a solution that would insert an ellipsis at the point where the paragraph overflows its container. JavaScript had to step in.

Ellipsis truncation is used throughout.

After in-depth research and exploration of several different approaches, we created our FTEllipsis library. In essence, it measures the available height of the container, then measures the height of each child element. When it finds the child element that overflows the container, it caps its height to a sensible number of lines. For WebKit-based browsers, we use the little-known -webkit-line-clamp property to truncate an element’s text by a set number of lines. For non-WebKit browsers, the library allows the developer to style the overflowing container however they wish using regular CSS.
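This isn’t FTEllipsis itself, just a sketch of the core WebKit trick it relies on; the class name and line count are made up:

/* Cap a block of text at three lines and let the engine draw the ellipsis. */
.article-summary {
    display: -webkit-box;
    -webkit-box-orient: vertical;
    -webkit-line-clamp: 3;
    overflow: hidden;
}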

Modularization

Having tackled some of the low-level visual challenges, we needed to step back and decide on the best way to manage our application’s views. We wanted to be able to reuse small parts of our views in different contexts and find a way to architect rock-solid styling that wouldn’t leak between components.

One of the best decisions we made in implementing the new application was to modularize the views. This started when we were first looking over the designs. We scribbled over printouts, breaking the page down into chunks (or modules). Our plan was to identify all of the possible layouts and modules, and define each view (or page) as a combination of modules sitting inside the slots of a single layout.

Each module needed to be named, but we found it very hard to describe a module, especially when some modules could have multiple appearances depending on screen size or context. As a result, we abandoned semantic naming and decided to name each component after a type of fruit — no more time wasted thinking up sensible, unambiguous names!

An example of a module’s markup:


<div class="apple">
  <h2 class="apple_headline">{{headline}}</h2>
  <h3 class="apple_sub-head">{{subhead}}</h3>
  <div class="apple_body">{{body}}</div>
</div>

An example of a module’s styling:


.apple {}

.apple_headline {
  font-size: 40px;
}

.apple_sub-head {
  font-size: 20px;
}

.apple_body {
  font-size: 14px;
  column-count: 2;
  color: #333;
}

Notice how each class is prefixed with the module’s name. This ensures that the styling for one component will never affect another; every module’s styling is encapsulated. Also, notice how we use just one class in our CSS selectors; this makes our component transportable. Ridding selectors of any ancestral context means that modules may be dropped anywhere in our application and will look the same. This is all imperative if we want to be able to reuse components throughout the application (and even across applications).

What If a Module Needs Interactions?

Each module (or fruit) has its own markup and style, which we wrote in such a way that it can be reused. But what if we need a module to respond to interactions or events? We need a way to bring the component to life, but still ensure that it is unbound from context so that it can be reused in different places. This is a little trickier than just writing smart markup and styling. To solve this problem, we wrote FruitMachine.

Reusable Components

FruitMachine is a lightweight library that assembles our layout’s components and enables us to declare interactions on a per-module basis. It was inspired by the simplicity of Backbone views, but with a little more structure to keep “boilerplate” code to a minimum. FruitMachine gives our team a consistent way to work with views, while at the same time remaining relatively unopinionated so that it can be used in almost any view.

The Component Mentality

Thinking about your application as a collection of standalone components changes the way you approach problems. Components need to be dumb; they can’t know anything of their context or of the consequences of any interactions that may occur within them. They can have a public API and should emit events when they are interacted with. An application-specific controller assembles each layout and is the brain behind everything. Its job is to create, control and listen to each component in the view.

For example, to show a popover when a component named “button” is clicked, we would not hardcode this logic into the button component. Instead “button” would emit a buttonclicked event on itself every time its button is clicked; the view controller would listen for this event and then show the popover. By working like this, we can create a large collection of components that can be reused in many different contexts. A view component may not have any application-specific dependencies if it is to be used across projects.
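Here is a generic sketch of that pattern (not FruitMachine’s actual API): the component only emits an event, and an application-specific controller decides what happens; showPopover() is a hypothetical helper.

// The component knows nothing about popovers; it only emits events.
function ButtonView(el) {
    var self = this;
    this.handlers = {};
    el.addEventListener('click', function() {
        self.emit('buttonclicked');
    });
}

ButtonView.prototype.on = function(name, fn) {
    (this.handlers[name] = this.handlers[name] || []).push(fn);
};

ButtonView.prototype.emit = function(name) {
    (this.handlers[name] || []).forEach(function(fn) { fn(); });
};

// The view controller knows about the popover; the button component does not.
var button = new ButtonView(document.querySelector('.button'));
button.on('buttonclicked', function() {
    showPopover();   // hypothetical application-level helper
});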

Working like this has simplified our architecture considerably. Breaking down our views into components and decoupling them from our application focuses our decision-making and moves us away from baking complex, heavily dependent modules into our application.

The Future of FruitMachine

FruitMachine was our solution to achieve fully transportable view components. It enables us to quickly define and assemble views with minimal effort. We are currently using FruitMachine only on the client, but server-side (NodeJS) usage has been considered throughout development. In the coming months, we hope to move towards producing server-side-rendered websites that progressively enhance into a rich app experience.

You can find out more about FruitMachine and check out some more examples in the public GitHub repository.

Retina Support

The Financial Times’ first Web app was released before the age of “Retina” screens. We retrofitted some high-resolution solutions, but never went the whole hog. For our designers, 100% Retina support was a must-have in the new application. We developers were sick of maintaining multiple sizes and resolutions of each tiny image within the UI, so a single vector-based solution seemed like the best approach. We ended up choosing icon fonts to replace our old PNGs, and because they are implemented just like any other custom font, they are really well supported. SVG graphics were considered, but after finding a lack of support in Android 2.3 and below, this option was ruled out. Plus, there is something nice about having all of your icons bundled up in a single file, whilst not sacrificing the individuality of each graphic (like sprites).

Our first move was to replace the Financial Times’ logo image with a single glyph in our own custom icon font. A font glyph may be any color and size, and it always looks super-sharp and is usually lighter in weight than the original image. Once we had proved it could work, we began replacing every UI image and icon with an icon font alternative. Now, the only pixel-based image in our CSS is the full-color logo on the splash screen. We used the powerful but rather archaic-looking FontForge to achieve this.

Once past the installation phase, you can open any font file in FontForge and individually change the vector shape of any character. We imported SVG vector shapes (created in Adobe Illustrator) into suitable character slots of our font and exported as WOFF and TTF font types. A combination of WOFF and TTF file formats is required to support iOS, Android and Windows devices, although we hope to rely only on WOFFs once Android gains support (plus, WOFFs are around 25% smaller in file size than TTFs).
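As an illustration of how such an icon font gets wired up (the font name and code point below are invented, not the FT’s):

/* Register the icon font with both WOFF and TTF sources, then map a glyph
   to a UI element via generated content. */
@font-face {
    font-family: 'app-icons';
    src: url('fonts/app-icons.woff') format('woff'),
         url('fonts/app-icons.ttf') format('truetype');
    font-weight: normal;
    font-style: normal;
}

.icon-logo:before {
    font-family: 'app-icons';
    content: '\e001';    /* the code point the logo glyph was exported to */
    font-size: 32px;
}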

The Financial Times’ icon font in FontForge

Images

Article images are crucial for user engagement. Our images are delivered as double-resolution JPEGs so that they look sharp on Retina screens. Our image service (running ImageMagick) outputs JPEGs at the lowest possible quality level without causing noticeable degradation (we use 35 for Retina devices and 70 for non-Retina). Scaling down retina size images in the browser enables us to reduce JPEG quality to a lower level than would otherwise be possible without compression artifacts becoming noticeable. This article explains this technique in more detail.
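The idea, sketched here with invented file names and dimensions, is simply to serve a double-sized, heavily compressed JPEG and let the browser scale it down:

<!-- A 2x JPEG saved at low quality, displayed at half its pixel dimensions. -->
<img src="article-image-1400x900-q35.jpg" width="700" height="450" alt="Article image">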

It’s worth noting that this technique does require the browser to work a little harder. In old browsers, the work of scaling down many large images could have a noticeable impact on performance, but we haven’t encountered any serious problems.

Native-Like Scrolling

Like almost any application, we require full-page and subcomponent scrolling in order to manage all of the content we want to show our users. On desktop, we can make use of the well-established overflow CSS property. When dealing with the mobile Web, this isn’t so straightforward. We require a single solution that provides a “momentum” scrolling experience across all of the devices we support.

overflow: scroll

The overflow: scroll declaration is becoming usable on the mobile Web. Android and iOS now support it, but only since Android 3.0 and iOS 5. iOS 5 came with the exciting new -webkit-overflow-scrolling: touch property, which allows for native momentum-like scrolling in the browser. Both of these options have their limitations.

Standard overflow: scroll and overflow: auto don’t display scroll bars as users might expect, and they don’t have the momentum touch-scrolling feel that users have become accustomed to from their native apps. The -webkit-overflow-scrolling: touch declaration does add momentum scrolling and scroll bars, but it doesn’t allow developers to style the scroll bars in any way, and has limited support (iOS 5+ and Chrome on Android).
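For reference, the CSS on the native path is tiny; a sketch follows, with the JavaScript fallback discussed below taking over where this isn’t supported:

/* Native momentum scrolling where the browser provides it. */
.scroll-pane {
    overflow-y: auto;
    -webkit-overflow-scrolling: touch;
}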

A Consistent Experience

Fragmented support and an inconsistent feel forced us to turn to JavaScript. Our first implementation used the TouchScroll library. This solution met our needs, but as our list of supported devices grew and as more complex scrolling interactions were required, working with it became trickier. TouchScroll lacks IE 10 support, and its API interface is difficult to work with. We also tried Scrollability and Zynga Scroller, neither of which have the features, performance or cross-browser capability we were looking for. Out of this problem, FTScroller was developed: a high-performance, momentum-scrolling library with support for iOS, Android, Playbook and IE 10.

FTScroller

FTScroller’s scrolling implementation is similar to TouchScroll’s, with a flexible API much like Zynga Scroller. We added some enhancements, such as CSS bezier curves for bouncing, requestAnimationFrame for smoother frame rates, and support for IE 10. The advantage of writing our own solution is that we could develop a product that exactly meets our requirements. When you know the code base inside out, fixing bugs and adding features is a lot simpler.

FTScroller is dead simple to use. Just pass in the element that will wrap the overflowing content, and FTScroller will implement horizontal or vertical scrolling as and when needed. Many other options may be declared in an object as the second argument, for more custom requirements. We use FTScroller throughout the Financial Times’ Web app for a consistent cross-platform scrolling experience.

A simple example:


var container = document.getElementById('scrollcontainer');
var scroller = new FTScroller(container);

The Gallery

The part of our application that holds and animates the page views is known as the “gallery.” It consists of three divisions: left, center and right. The page that is currently in view is located in the center pane. The previous page is positioned off screen in the left-hand pane, and the next page is positioned off screen in the right-hand pane. When the user swipes to the next page, we use CSS transitions to animate the three panes to the left, revealing the hidden right pane. When the transition has finished, the right pane becomes the center pane, and the far-left pane skips over to become the right pane. By using only three page containers, we keep the DOM light, while still creating the illusion of infinite pages.
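A sketch of the styling behind such a gallery (class names invented): three panes sit side by side in a strip, and a transform transition slides the next one into view before the panes are reassigned.

.gallery { overflow: hidden; }

.gallery-strip {
    width: 300%;                              /* left, center and right panes */
    -webkit-transform: translateX(-33.333%);  /* center pane in view */
    transform: translateX(-33.333%);
    -webkit-transition: -webkit-transform 300ms ease-out;
    transition: transform 300ms ease-out;
}

.gallery-strip.is-advancing {
    -webkit-transform: translateX(-66.666%);  /* slide the right pane into view */
    transform: translateX(-66.666%);
}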

Infinite scrolling made possible with a three-pane gallery

Making It All Work Offline

Not many Web apps currently offer an offline experience, and there’s a good reason for that: implementing it is a bloody pain! The application cache (AppCache) at first glance appears to be the answer to all offline problems, but dig a little deeper and stuff gets nasty. Talks by Andrew Betts and Jake Archibald explain really well the problems you will encounter. Unfortunately, AppCache is currently the only way to achieve offline support, so we have to work around its many deficiencies.

Our approach to offline is to store as little in the AppCache as possible. We use it for fonts, the favicon and one or two UI images — things that we know will rarely or never need updating. Our JavaScript, CSS and templates live in LocalStorage. This approach gives us complete control over serving and updating the most crucial parts of our application. When the application starts, the bare minimum required to get the app up and running is sent down the wire, embedded in a single HTML page; we call this the preload.

We show a splash screen, and behind the scenes we make a request for the application’s full resources. This request returns a big JSON object containing our JavaScript, CSS and Mustache templates. We eval the JavaScript and inject the CSS into the DOM, and then the application launches. This “bootstrap” JSON is then stored in LocalStorage, ready to be used when the app is next started up.

On subsequent startups, we always use the JSON from LocalStorage and then check for resource updates in the background. If an update is found, we download the latest JSON object and replace the existing one in LocalStorage. Then, the next time the app starts, it launches with the new assets. If the app is launched offline, the startup process is the same, except that we cannot make the request for resource updates.
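A condensed sketch of that startup flow, with invented key and endpoint names:

// Launch from a bootstrap object containing the app's JS, CSS and templates.
function launch(resources) {
    var style = document.createElement('style');
    style.textContent = resources.css;       // inject the CSS
    document.head.appendChild(style);
    eval(resources.js);                       // evaluate the application JavaScript
}

var cached = localStorage.getItem('app-bootstrap');
if (cached) {
    launch(JSON.parse(cached));               // subsequent startups: use LocalStorage
}

// Always ask for the latest resources; offline, this request simply fails.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/bootstrap.json');           // hypothetical endpoint
xhr.onload = function() {
    localStorage.setItem('app-bootstrap', xhr.responseText);
    if (!cached) launch(JSON.parse(xhr.responseText));   // first run: launch now
};
xhr.send();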

Images

Managing offline images is currently not as easy as it should be. Our image requests are run through a custom image loader and cached in the local database (IndexedDB or WebSQL) so that the images can be loaded when a network connection is not present. We never load images in the conventional way, otherwise they would break when users are offline.

Our image-loading process:

  1. The loader scans the page for image placeholders declared by a particular class.
  2. It takes the src attribute of each image placeholder found and requests the source from our JavaScript image-loader library.
  3. The local database is checked for each image. Failing that, a single HTTP request is made listing all missing images.
  4. A JSON array of Base64-encoded images is returned from the HTTP response and stored separately in the local database.
  5. A callback is fired for each image request, passing the Base64 string as an argument.
  6. An <img> element is created, and its src attribute is set to the Base64 data-URI string.
  7. The image is faded in.

I should also mention that we compress our Base64-encoded image strings in order to fit as many images in the database as possible. My colleague Andrew Betts goes into detail on how this can be achieved.
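Setting the compression aside, a small sketch of steps 5 to 7 (with invented names; the fade itself would be a CSS opacity transition):

// Turn a Base64 string from the response into a visible image.
function showImage(placeholder, base64) {
    var img = document.createElement('img');
    img.onload = function() {
        placeholder.appendChild(img);
        img.classList.add('is-loaded');   // e.g. transitions opacity from 0 to 1
    };
    img.src = 'data:image/jpeg;base64,' + base64;
}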

In some cases, we use this cool trick to handle images that fail to load:


<img src="image.jpg" onerror="this.style.display='none';" />

Ever-Evolving Applications

In order to stay competitive, a digital product needs to evolve, and as developers, we need to be prepared for this. When the request for a redesign landed at the Financial Times, we already had a fast, popular, feature-rich application, but it wasn’t built for change. At the time, we were able to implement small changes to features, but implementing anything big became a slow process and often introduced a lot of unrelated regressions.

Our application was drastically reworked to make the new requirements possible, and this took a lot of time. Having made this investment, we hope the new application not only meets (and even exceeds) the standard of the first product, but gives us a platform on which we can develop faster and more flexibly in the future.

(al)

© Wilson Page for Smashing Magazine, 2013.


I know you’ve been asked this plenty of times already, but: no new vendor prefixes, right? Right?

Nope, none! They’re great in theory but turns out they fail in practice, so we’re joining Mozilla and the W3C CSS WG and moving away from them. There’s a few parts to this.

Firstly, we won’t be migrating the existing -webkit- prefixed properties to a -chrome- or -blink- prefix, that’d just make extra work for everyone. Secondly, we inherited some existing properties that are prefixed. Some, like -webkit-transform, are standards track, and we work with the CSS WG to move those standards ahead while we fix any remaining issues in our implementation; we’ll unprefix them when they’re ready. Others, like -webkit-box-reflect, are not standards track, and we’ll bring them to standards bodies or responsibly deprecate these on a case-by-case basis. Lastly, we’re not introducing any new CSS properties behind a prefix.

Pinky swear?

Totes. New stuff will be available to experiment with behind a flag you can turn on in about:flags called “Experimental Web Platform Features”. When the feature is ready, it’ll graduate to Canary, and then follow its ~12 week path down through Dev Channel and Beta to all users at Stable.

The Blink prefix policy is documented and, in fact, WebKit just nailed down their prefix policy going forward. If you’re really into prefix drama (and who isn’t!) Chris Wilson and I discussed this a lot more on the Web Ahead podcast [37:20].

How long before we can try Blink out in Chrome?

Blink’s been in Chrome Canary as of the day we announced it. The codebase was 99.9% the same when Blink launched, so no need to rush out and check everything. All your sites should be pretty much the same.

Chrome 27 has the Blink engine, and that’s available on the beta channel for Win, Mac, Linux, ChromeOS and Android. (See the full beta/stable/dev/canary view.)

While the internals are apt to be fairly different, will there be any radical changes to the rendering side of things in the near future?

Nothing too alarming, layout and CSS stuff is all staying the same. Grid layout is still in development, though, and our Windows text rendering has been getting a new backend that we can hook up soon, greatly boosting the quality of webfont rendering there.

We’re also interested in better taking advantage of multiple cores on machines, so the more we can move painting, layout (aka reflow), and style recalculation to a separate thread, the faster everyone’s sites will render. We’re already doing multi-threaded painting on ChromeOS and Android, and looking into doing it on Mac & Windows. If you’re interested in these experimental efforts or watching new feature proposals, take a look at the blink-dev mailing list. A recent proposed experiment is called Oilpan, where we’ll look into the advantages of moving the implementation of Chrome’s DOM into JavaScript.

Will features added to Blink be contributed back to the WebKit project? Short term; long term?

Since Blink launched there have been a few patches that landed in both Blink and WebKit, though this is expected to decline in the long term, as the code bases will diverge.

When are we likely to start seeing Blink-powered versions of Chrome on Android? Is it even possible on iOS, or is iOS Chrome still stuck with a Safari webview due to Apple’s policies?

Blink is now in the Chrome Beta for Android. Chrome for iOS, due to platform limitations, is based on the WebKit-based WebView that’s provided by iOS.

Part of this move seems to be giving Google the freedom to remove old or disused features that have been collecting dust in WebKit for ages. There must be a few things high on that list—what are some of those things, and how can we be certain their removal won’t lead to the occasional broken website?

A few old ’n crusty things that we’re looking at removing: the isindex attribute, RangeException, and XMLHttpRequestException. Old things that have little use in the wild and just haven’t gotten a spring cleaning from the web platform for ages.

Now, we don’t want to break the web, and that’s something that web browser engineers have always been kept very aware of. We carefully gauge real-world usage of things like CSS and DOM features before deprecating anything. At Google we have a copy of the web that we run queries against, so we have a pretty OK idea of what CSS and JavaScript is in use out there.

Blink also has over 32,000 tests in its test suite, and manual confirmation that over 100 sites work great before every release ships. And we’re working closely with the W3C and Adobe to share tests and testing infrastructure across browsers, with the goals of reducing maintenance burden, improving interoperability, and increasing test coverage. Eventually we’d like all new features to ship with shared conformance tests, ensuring interoperability even as we add cutting-edge stuff.

Still, any deprecation has to be done responsibly. There’s now a draft Blink process for deprecating features which includes:

  • Anonymous metrics to understand how much any specific feature is used “in the wild”
  • “Intent to deprecate” emails that hit blink-dev months before anything is removed
  • Warnings that you’ll find in your DevTools console if you’re using anything deprecated
  • Mentions on the Chromium blog, like this Chrome 27 wrap-up.

Did part of the decision to branch away from WebKit involve resistance to adding a Dart VM? WebKit’s goals explicitly mention JavaScript, and Apple representatives have been fairly vocal about not seeing a need.

Nope, not at all. The decision was made by the core web platform engineers. Introducing a new VM to a browser introduces considerable maintenance cost (we saw this with V8 and JavaScriptCore both in WebKit) and right now Dart isn’t yet ready to be considered for an integration with Blink. (more on that in a sec). Blink’s got strong principles around compatibility risk and this guides a lot of the decisions around our commitments to potential features as they are proposed. You can hear a more complete answer here from Darin Fisher, one of the Chrome web platform leads.

Have any non-WebKit browsers recently expressed an interest in Dart? A scripting language that only stands to work in one browser sounds a little VBScript-y.

Not yet, but since Dart compiles to JavaScript and runs across the modern web, it’s not gated by other browsers integrating the VM. But it’s still early days: Dart has not yet reached a stable 1.0 milestone, and there are still technical challenges with the Dart VM around performance and memory management. Still, it’s important to point out that Dart is an open source project, with a bunch of external contributors and committers.

Let me take a moment to provide my own perspective on Dart. :) Now, as you know, I’m a JavaScript guy, so early on, I took a side and considered Dart an enemy. JavaScript should win; Dart is bad! But then I came to realize the Dart guys aren’t just setting out to improve the authoring and scalability of web application development. They also really want the web to win. Now I’ve recently spoken about how The Mobile Web Is In Trouble, and clarified that my priorities are seeing it provide a fantastic user experience to everyone. For me, seeing the mobile web be successful trumps language wars and certainly quibbling over syntax. So I’m happy to see developers embrace the authoring advantages of Coffeescript, the smart subset of JavaScript strict mode, the legendary Emscripten & asm.js combo, the compiler feedback of TypeScript and the performance ambitions of Dart. It’s worth trying out technologies that can leapfrog the current expectations of the user experience that we can deliver. Our web is worth it.

Will Opera be using the Chromium version of Blink wholesale, as far as you know? Are we likely to see some divergence between Opera and Chrome?

As I understand it, Opera Mobile, Opera Desktop, and Opera Mini will all be based on Chromium. This means that they’ll not only share the exact version of Blink that Chrome uses, but also the same graphics stack, JavaScript engine, and networking stack. Already, Opera has contributed some great things to Blink and we’re excited about what’s next.

Why the name “Blink,” anyway?

Haha. Well… it’s a two parter. First, Blink evokes a certain feeling of speed and simplicity—two core principles of Chrome. Then, Chrome has a little tradition of slightly ironic names. Chrome itself is all about minimizing the browser chrome, and the Chromebook Pixel is all about not seeing any pixels at all. So naturally, it fits that Blink will never support the infamous <blink> tag. ;)

<3z

Original author: Andrew Cunningham


The Galaxy S 4's display is a sizable step forward for PenTile AMOLED, according to DisplayMate's Raymond Soneira. (Photo: Florence Ion)

We've already given you our subjective impressions of Samsung's Galaxy S 4 and its 1080p AMOLED display, but for those of you who hunger for quantitative data, Dr. Raymond Soneira of DisplayMate has given the phone an in-depth shakedown. Soneira compares the screen's brightness, contrast, color gamut, and power consumption to both the Galaxy S III (which also uses an AMOLED display) and the IPS panel in the iPhone 5. What he found was that Samsung's AMOLED technology is still fighting against some of its inherent weaknesses, but it has made great strides forward even since the Galaxy S III was released last year.

To recap: both the S III and S 4 use PenTile AMOLED screens, which use a slightly different pixel arrangement than traditional LCD screens. A pixel in a standard LCD panel has one red, one green, and one blue stripe; PenTile uses alternating red-green-blue-green subpixels, taking advantage of the eye's sensitivity to green to display the same image using fewer total subpixels. These screens cost less to manufacture but can have issues with color accuracy and text crispness. The backlight for each type of display is also different—white LEDs behind the iPhone's display shine through the red, green, and blue subpixels to create an image, while the AMOLED subpixels are self-lit. This has implications for brightness, contrast, and power consumption.


A close-up shot of PenTile AMOLED in the Nexus One, when the tech was much less mature. (Photo: Luke Hutchinson)

We'll try to boil Soneira's findings down to their essence. One of the S 4's benefits over its predecessor is (obviously) its pixel density, which at 441 ppi is considerably higher than either its predecessor or the iPhone 5. Soneira says that this helps it to overcome the imbalance between PenTile's green subpixels and its less numerous red and blue ones, which all but banishes PenTile's "fuzzy text" issues:

Read 5 remaining paragraphs | Comments

Original author: samzenpus

An anonymous reader writes "Facebook on Friday released its Android launcher called Home. The company also updated its Facebook app, adding in new permissions to allow it to collect data about the apps you are running. Facebook has set up Home to interface with the main Facebook app on Android to do all the work. In fact, the main Facebook app features all the required permissions letting the Home app meekly state: 'THIS APPLICATION REQUIRES NO SPECIAL PERMISSIONS TO RUN.' As such, it’s the Facebook app that’s doing all the information collecting. It’s unclear, however, if it will do so even if Facebook Home is not installed. Facebook may simply be declaring all the permissions the Home launcher requires, meaning the app only starts collecting data if Home asks it to."



Original author: Sean Gallagher


The MT.Gox lookalike site that delivered malware to unwitting Bitcoiners.

In another example of the security mantra of "be careful what you click," at least one Bitcoin trader has been robbed in a forum "phishing" attack designed specifically to ride the hype around the digital currency. The attack attempts to use Java exploits or fake Adobe updates to install malware, and it's one of the first targeted attacks aimed at the burgeoning business of Bitcoin exchanges.

The bait for the attack was a post to a Bitcoin traders' forum announcing that MT.Gox was going to start handling exchanges of Litecoins, a Bitcoin alternative. The post advertised a live chat on the topic at a link provided to mtgox-chat.info. That site, which used stolen code and style to masquerade as the legitimate MT.Gox site, then prompted victims to update their Java plugin and offered a forged Adobe updater.

The scam was first reported on reddit earlier this week, when a redditor reported spotting the fake site and its attempt to drop malware. While the attack was originally described by one of its victims as a "Java zero-day" exploit, it actually uses either a Java exploit or a fake Adobe updater to deliver its malware payload. That payload is DarkComet, a fairly common "remote administration tool" and keylogger. The attackers not only stole credentials for the victim's MT.Gox account, but they took other passwords as well.

Read 8 remaining paragraphs | Comments

Original author: Dan Goodin

Image: Aurich Lawson / Thinkstock

Tens of thousands of websites, some operated by The Los Angeles Times, Seagate, and other reputable companies, have recently come under the spell of "Darkleech," a mysterious exploitation toolkit that exposes visitors to potent malware attacks.

The ongoing attacks, estimated to have infected 20,000 websites in the past few weeks alone, are significant because of their success in targeting Apache, by far the Internet's most popular Web server software. Once it takes hold, Darkleech injects invisible code into webpages, which in turn surreptitiously opens a connection that exposes visitors to malicious third-party websites, researchers said. Although the attacks have been active since at least August, no one has been able to positively identify the weakness attackers are using to commandeer the Apache-based machines. Vulnerabilities in Plesk, cPanel, or other software used to administer websites are one possibility, but researchers aren't ruling out the possibility of password cracking, social engineering, or attacks that exploit unknown bugs in frequently used applications and OSes.

Researchers also don't know precisely how many sites have been infected by Darkleech. The server malware employs a sophisticated array of conditions to determine when to inject malicious links into the webpages shown to end users. Visitors using IP addresses belonging to security and hosting firms are passed over, as are people who have recently been attacked or who don't access the pages from specific search queries. The ability of Darkleech to inject unique links on the fly is also hindering research into the elusive infection toolkit.

Read 14 remaining paragraphs | Comments

Original author: Scott Gilbertson

Look Ma, no floats! Image: Adobe

HTML5 and CSS 3 offer web developers new semantic tags, native animation tools, server-side fonts and much more, but that’s not the end of the story. In fact, for developers slogging away in the web design trenches, one of the most promising parts of CSS 3 is still just over the horizon — true page layout tools.

While it’s possible to create amazingly complex layouts using tools like CSS floats, positioning rules and the odd bit of JavaScript, none of those tools were actually created explicitly for laying out content, which is why it’s amazingly complex to get them working the way you want across browsers.

Soon, however, you’ll be able to throw out your floats and embrace a better way — the CSS Flexible Box Model, better known as simply Flexbox. Flexbox enables you to create complex layouts with only a few lines of code — no more floats and “clearfix” hacks.

Perhaps even more powerful — especially for those building responsive websites — the Flexbox order property allows you to create layouts completely independent of the HTML source order. Want the footer at the top of the page for some reason? No problem, just set your footer CSS to order: 1;. Flexbox also makes it possible to do vertical centering. Finally.
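A short sketch of both tricks (vendor prefixes omitted here; browser support is covered below):

/* Source-order independence: the footer renders first, the main content second,
   regardless of where they appear in the HTML. */
.page {
    display: flex;
    flex-direction: column;
}

.page footer { order: 1; }
.page main   { order: 2; }

/* Vertical (and horizontal) centering, at last. */
.hero {
    display: flex;
    align-items: center;
    justify-content: center;
    height: 300px;
}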

We’ve looked at Flexbox in the past, but, unfortunately the spec has undergone a serious re-write since then, which renders older code obsolete. If you’d like to get up to speed with the new syntax, the Adobe Developer Blog recently published a guide to working with Flexbox by developer Steven Bradley.

Bradley walks through the process of using Flexbox in both mobile and desktop layouts, rearranging source order and elements to get both layouts working with a fraction of the code it would take to do the same using floats and other, older layout tools. The best way to wrap your head around Flexbox is to see it in action, so be sure to follow the links to Bradley’s demo page using either Chrome, Opera or Firefox 20+.

For some it may still be too early to use Flexbox. Browser support is improving, but obviously older browsers will never support Flexbox, so bear that in mind. Opera 12 supports the new syntax, no prefix necessary. Chrome supports the new syntax, but needs the -webkit prefix. Like Opera, Firefox 22 supports the unprefixed version of the new spec. Prior to v22 (currently in the beta channel), Firefox supports the old syntax. IE 10 supports the older Flexbox syntax. Most mobile browsers support the older syntax, though that is starting to change. [Update: Mozilla developer Daniel Holbert, who is working on the Flexbox code in Firefox, wrote to let me know that the Flexbox support has been pushed back to Firefox 22. Actually the new Flexbox syntax is part of Firefox 20 and up, but until v22 arrives it's disabled by default. You can turn it on by heading to about:config and searching for the layout.css.flexbox.enabled pref. Set it to true and the modern syntax will work.]

So, as of this writing, only two web browsers really support the new Flexbox syntax, though Firefox will make that three in the next month or so.

But there is a way to work around some of the issues. First off, check out Chris Coyier’s article on mixing the old and new syntaxes to get the widest possible browser support. Coyier’s methods will get your Flexbox layouts working in pretty much everything but IE 9 and below.
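A sketch along the lines of that mixed-syntax approach, stacking the old, tweener and new declarations so each browser picks up the one it understands:

.container {
    display: -webkit-box;      /* old 2009 syntax: older WebKit, Android */
    display: -moz-box;         /* old syntax: older Firefox */
    display: -ms-flexbox;      /* tweener syntax: IE 10 */
    display: -webkit-flex;     /* new syntax, prefixed: Chrome */
    display: flex;             /* new syntax: Opera 12.1, Firefox behind its flag */
}

.item {
    -webkit-box-flex: 1;       /* old */
    -moz-box-flex: 1;          /* old */
    -ms-flex: 1;               /* tweener */
    -webkit-flex: 1;           /* new, prefixed */
    flex: 1;                   /* new */
}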

If you’re working on a personal site that might be okay — IE 9 and below would just get a simplified, linear layout. Or you could serve an extra stylesheet with some floats to older versions of IE (or use targeted CSS classes if you prefer). That defeats some of the benefits of Flexbox since you’ll be writing floats and the like for IE, but when usage drops off you can just dump that code and help move your site, and the web, forward.
