
Cloud clients

Original author: Florence Ion


What if you could privately use an application and manage its permissions to keep ill-intentioned apps from accessing your data? That’s exactly what Steve Kondik at CyanogenMod—the aftermarket, community-based firmware for Android devices—hopes to bring to the operating system. It’s called Incognito Mode, and it’s designed to help keep your personal data under control.

Kondik, a lead developer with the CyanogenMod team, published a post on his Google Plus profile last week about Incognito Mode. He offered more details on the feature:

I've added a per-application flag which is exposed via a simple API. This flag can be used by content providers to decide if they should return a full or limited dataset. In the implementation I'm working on, I am using the flag to provide these privacy features in the base system:

  • Return empty lists for contacts, calendar, browser history, and messages.
  • GPS will appear to always be disabled to the running application.
  • When an app is running incognito, a quick panel item is displayed in order to turn it off easily.
  • No fine-grained permissions controls as you saw in CM7. It's a single option available under application details.

The API provides a simple isIncognito() call which will tell you if incognito is enabled for the process (or the calling process). Third party applications can honor the feature using this API, or they can choose to display pictures of cats instead of running normally.

Every time you install a new application on Android, the operating system asks you to review the permissions the app requests before installation can proceed. This all-or-nothing approach to user data is precarious: users can’t deny individual permissions to pick and choose what an application has access to, even if they still want to use that app. Incognito Mode could potentially fix this conundrum, enabling users to withhold their data from specific applications.


Original author: Addy Osmani

  

Today we’ll discuss how to improve the paint performance of your websites and Web apps. This is an area that we Web developers have only recently started looking at more closely, and it’s important because it could have an impact on your user engagement and user experience.

Frame Rate Applies To The Web, Too

Frame rate is the rate at which a device draws consecutive images to the screen. A low frame rate (measured in frames per second, or FPS) means that individual frames can be made out by the eye. A high FPS gives users a more responsive feel. You’re probably used to this concept from the world of gaming, but it applies to the Web, too.

Long image decoding, unnecessary image resizing, heavy animation and data processing can all lead to dropped frames, which reduces the frame rate, resulting in janky pages. We’ll explain what exactly we mean by “jank” shortly.

Why Care About Frame Rate?

Smooth, high frame rates drive user engagement and can affect how much users interact with your website or app.

At EdgeConf earlier this year, Facebook confirmed this when it mentioned that in an A/B test, it slowed down scrolling from 60 FPS to 30 FPS, causing engagement to collapse. That said, if you can’t do high frame rates and 60 FPS is out of reach, then you’d at least want something smooth. If you’re doing your own animation, this is one benefit of using requestAnimationFrame: the browser can dynamically adjust to keep the frame rate normal.

In cases where you’re concerned about scrolling, the browser can manage the frame rate for you. But if you introduce a large amount of jank, then it won’t be able to do as good a job. So, try to avoid big hitches, such as long paints, long JavaScript execution times, long anything.
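
As a minimal illustration of that requestAnimationFrame benefit (the element ID and the animation are placeholders), the sketch below drives a short animation from the browser’s frame scheduler rather than from a timer:

// A short animation driven by requestAnimationFrame (the element ID is a placeholder).
// The browser schedules each tick against the display's refresh, so the work naturally
// lines up with frames instead of fighting them as a setTimeout loop can.
var box = document.getElementById('box');
var start = null;

function tick(timestamp) {
  if (start === null) { start = timestamp; }

  // Move the element 100px over one second, then stop.
  var progress = Math.min((timestamp - start) / 1000, 1);
  box.style.transform = 'translateX(' + (progress * 100) + 'px)';

  if (progress < 1) {
    requestAnimationFrame(tick);
  }
}

requestAnimationFrame(tick);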

Don’t Guess It, Test It!

Before getting started, we need to step back and look at our approach. We all want our websites and apps to run more quickly. In fact, we’re arguably paid to write code that runs not only correctly, but quickly. As busy developers with deadlines, we find it very easy to rely on snippets of advice that we’ve read or heard. Problems arise when we do that, though, because the internals of browsers change very rapidly, and something that’s slow today could be quick tomorrow.

Another point to remember is that your app or website is unique, and, therefore, the performance issues you face will depend heavily on what you’re building. Optimizing a game is a very different beast to optimizing an app that users will have open for 200+ hours. If it’s a game, then you’ll likely need to focus your attention on the main loop and heavily optimize the chunk of code that is going to run every frame. With a DOM-heavy application, the memory usage might be the biggest performance bottleneck.

Your best option is to learn how to measure your application and understand what the code is doing. That way, when browsers change, you will still be clear about what matters to you and your team and will be able to make informed decisions. So, no matter what, don’t guess it, test it!

We’re going to discuss how to measure frame rate and paint performance shortly, so hold onto your seats!

Note: Some of the tools mentioned in this article require Chrome Canary, with the “Developer Tools experiments” enabled in about:flags. (We — Addy Osmani and Paul Lewis — are engineers on the Developer Relations team at Chrome.)

Case Study: Pinterest

The other day we were on Pinterest, trying to find some ponies to add to our pony board (Addy loves ponies!). So, we went over to the Pinterest feed and started scrolling through, looking for some ponies to add.

Addy adding some ponies to his Pinterest board, as one does.

Jank Affects User Experience

The first thing we noticed as we scrolled was that scrolling on this page doesn’t perform very well — scrolling up and down takes effort, and the experience just feels sluggish. When they come up against this, users get frustrated, which means they’re more likely to leave. Of course, this is the last thing we want them to do!

Pinterest showing a performance bottleneck when a user scrolls.

This break in consistent frame rate is something the Chrome team calls “jank,” and we’re not sure what’s causing it here. You can actually notice some of the frames being drawn as we scroll. But let’s visualize it! We’re going to open up Frames mode and show what slow looks like there in just a moment.

Note: What we’re really looking for is a consistently high FPS, ideally matching the refresh rate of the screen. In many cases, this will be 60 FPS, but it’s not guaranteed, so check the devices you’re targeting.

Now, as JavaScript developers, our first instinct is to suspect a memory leak as the cause. Perhaps some objects are being held onto after a round of garbage collection. The reality, however, is that very often these days JavaScript is not the bottleneck. Our major performance problems come down to slow painting and rendering times. The DOM needs to be turned into pixels on the screen, and heavy paint work while the user scrolls can slow the whole page down.

Note: HTML5 Rocks specifically discusses some of the causes of slow scrolling. If you think you’re running into this problem, it’s worth a read.

Measuring Paint Performance

Frame Rate

We suspect that something on this page is affecting the frame rate. So, let’s go open up Chrome’s Developer Tools and head to the “Timeline” and “Frames” mode to record a new session. We’ll click the record button and start scrolling the page the way a normal user would. Now, to simulate a few minutes of usage, we’re going to scroll just a little faster.

Using Chrome’s Developer Tools to profile scrolling interactions.

Up, down, up, down. What you’ll notice now in the summary view up at the top is a lot of purple and green, corresponding to painting and rendering times. Let’s stop recording for now. As we flip through these various frames, we see some pretty hefty “Recalculate Styles” and a lot of “Layout.”

If you look at the legend to the very right, you’ll see that we’ve actually blown our budget of 60 FPS, and in many cases we’re not even hitting 30 FPS. It’s just performing quite poorly. Now, each of these bars in the summary view corresponds to one frame — i.e. all of the work that Chrome has to do in order to be able to draw an app to the screen.

Chrome’s Developer Tools showing a long paint time.

Frame Budget

If you’re targeting 60 FPS, which is generally the optimal target these days because it matches the refresh rate of the devices we commonly use, then you have a 16.7-millisecond budget in which to complete everything: JavaScript, layout, image decoding and resizing, painting, compositing — everything.

Note: A constant frame rate is our ideal here. If you can’t hit 60 FPS for whatever reason, then you’re likely better off targeting 30 FPS, rather than allowing a variable frame rate between 30 and 60 FPS. In practice, this can be challenging to code because when the JavaScript finishes executing, all of the layout, paint and compositing work still has to be done, and predicting that ahead of time is very difficult. In any case, whatever your frame rate, ensure that it is consistent and doesn’t fluctuate (which would appear as stuttering).

If you’re aiming for low-end devices, such as mobile phones, then that frame budget of 16 milliseconds is really more like 8 to 10 milliseconds. This could be true on desktop as well, where your frame budget might be lowered as a result of miscellaneous browser processes. If you blow this budget, you will miss frames and see jank on the page. So, you likely have somewhere nearer 8 to 10 milliseconds, but be sure to test the devices you’re supporting to get a realistic idea of your budget.
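
If you want a rough, in-page feel for how often you bust that budget, you can compare successive requestAnimationFrame timestamps. This is only a crude sketch and no substitute for the Timeline, but it can be handy while testing on a real device:

// A crude frame-time monitor: compare successive requestAnimationFrame timestamps
// and log frames that take noticeably longer than the ~16.7 ms budget for 60 FPS.
var lastFrameTime = null;

function monitorFrames(timestamp) {
  if (lastFrameTime !== null) {
    var delta = timestamp - lastFrameTime;
    if (delta > 2 * 16.7) {
      console.warn('Long frame: ' + delta.toFixed(1) + ' ms since the last one');
    }
  }
  lastFrameTime = timestamp;
  requestAnimationFrame(monitorFrames);
}

requestAnimationFrame(monitorFrames);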

An extremely costly layout operation of over 500 milliseconds.

Note: We’ve also got an article on how to use the Chrome Developer Tools to find and fix rendering performance issues that focuses more on the timeline.

Going back to scrolling, we have a sneaking suspicion that a number of unnecessary repaints are occurring on this page with onscroll.

One common mistake is to stuff just way too much JavaScript into the onscroll handlers of a page — making it difficult to meet the frame budget at all. Aligning the work to the rendering pipeline (for example, by placing it in requestAnimationFrame) gives you a little more headroom, but you still have only those few milliseconds in which to get everything done.

The best thing you can do is just capture values such as scrollTop in your scroll handlers, and then use the most recent value inside a requestAnimationFrame callback.
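
Here is a minimal sketch of that pattern: the scroll handler does nothing but record the latest value and schedule a frame, and the real work happens at most once per frame inside the requestAnimationFrame callback (the work itself is a placeholder):

var latestScrollY = 0;
var ticking = false;

// The scroll handler does as little as possible: record the value and schedule a frame.
window.addEventListener('scroll', function () {
  latestScrollY = window.scrollY;
  if (!ticking) {
    ticking = true;
    requestAnimationFrame(update);
  }
});

// All of the real work happens at most once per frame.
function update() {
  ticking = false;
  // Placeholder work: in a real page this might reposition a header,
  // run parallax maths or decide what to lazy-load.
  console.log('Handled scroll position', latestScrollY);
}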

Paint Rectangles

Let’s go back to Developer Tools → Settings and enable “Show paint rectangles.” This visualizes the areas of the screen that are being painted with a nice red highlight. Now look at what happens as we scroll through Pinterest.

Enabling Chrome Developer Tools’ “Paint Rectangles” feature.

Every few milliseconds, we experience a big bright flash of red across the entire screen. There seems to be a paint of the whole screen every time we scroll, which is potentially very expensive. What we want to see is the browser just painting what is new to the page — so, typically just the bottom or top of the page as it gets scrolled into view. The cause of this issue seems to be the little “scroll to top” button in the lower-right corner. As the user scrolls, the fixed header at the top needs to be repainted, but so does the button. The way that Chrome deals with this is to create a union of the two areas that need to be repainted.

Chrome shows freshly painted areas with a red box.

In this case, there is a rectangle from the top left to top right, but not very tall, plus a rectangle in the lower-right corner. This leaves us with a rectangle from the top left to bottom right, which is essentially the whole screen! If you inspect the button element in Developer Tools and either hide it (using the H key) or delete it and then scroll again, you will see that only the header area is repainted. The way to solve this particular problem is to move the scroll button to its own layer so that it doesn’t get unioned with the header. This essentially isolates the button so that it can be composited on top of the rest of the page. But we’ll talk about layers and compositing in more detail in a little bit.

The next thing we notice has to do with hovering. When we hover over a pin, Pinterest paints a bar containing “Repin”, “Comment” and “Like” buttons; let’s call this the action bar. When we hover over a single pin, it paints not just the bar but also the elements underneath it. Painting should happen only on the elements that you expect to change visually.

A cause for concern: full-screen flashes of red indicate a lot of painting.

There’s another interesting thing about scrolling here. Let’s keep our cursor hovered over this pin and start scrolling the page again.

Every time we scroll through a new row of images, this action bar gets painted on yet another pin, even though we don’t mean to hover over it. This comes down more to UX than anything else, but scrolling performance in this case might be more important than the hover effect during scrolling. Hovering amplifies jank during scrolling because the browser essentially pauses to go off and paint the effect (the same is true when we roll out of the element!). One option here is to use a setTimeout with a delay to ensure that the bar is painted only when the user really intends to use it, an approach we covered in “Avoiding Unnecessary Paints.” A more aggressive approach would be to measure the mouseenter or the mouse’s trajectory before enabling hover behaviors. While this measure might seem rather extreme, remember that we are trying to avoid unnecessary paints at all costs, especially when the user is scrolling.
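
One way to put that into practice is to switch hover effects off while the page is scrolling and re-enable them only once scrolling has stopped for a moment. The sketch below assumes the hover styles are scoped under a class such as hover-enabled; the class name, selector and delay are placeholders:

// Disable hover effects while scrolling and re-enable them 250 ms after it stops.
// Assumes hover styles are written as `.hover-enabled .pin:hover { ... }`;
// the class name, selector and delay are placeholders.
document.body.classList.add('hover-enabled');

var hoverTimer = null;

window.addEventListener('scroll', function () {
  document.body.classList.remove('hover-enabled');
  clearTimeout(hoverTimer);
  hoverTimer = setTimeout(function () {
    document.body.classList.add('hover-enabled');
  }, 250);
});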

Overall Paint Cost

We now have a really great workflow for looking at the overall cost of painting on a page; go back into Developer Tools and “Enable continuous page repainting.” This feature will constantly paint to your screen so that you can find out what elements have costly paint times. You’ll get this really nice black box in the top corner that summarizes paint times, with the minimum and maximum also displayed.

Chrome’s “Continuous Page Repainting” mode helps you to assess the overall cost of a page.

Let’s head back to the “Elements” panel. Here, we can select a node and just use the keyboard to walk the DOM tree. If we suspect that an element has an expensive paint, we can use the H shortcut key (something recently added to Chrome) to toggle visibility on that element. Using the continuous paint box, we can instantly see whether this has a positive effect on our pages’ paint times. We should expect it to in many cases, because if we hide an element, we should expect a corresponding reduction in paint times. But by doing this, we might see one element that is especially expensive, which would bear further scrutiny!

The “Continuous Page Repainting” chart showing the time taken to paint the page.

For Pinterest’s website, we can try this with the categories bar or the header, and, as you’d expect, because we don’t have to paint these elements at all, we see a drop in the time it takes to paint to the screen. If we want even more detailed insight, we can go right back to the timeline and record a new session to measure the impact. Isn’t that great? Now, while this workflow should work great for most pages, there might be times when it isn’t as useful. In Pinterest’s case, the pins are actually quite deeply nested in the page, making it hard for us to measure paint times in this workflow.

Luckily, we can still get some good mileage by selecting an element (such as a pin here), going to the “Styles” panel and looking at what CSS styles are being used. We can toggle properties on and off to see how they affect the paint times. This gives us much finer-grained insight into the paint profile of the page.

Here, we see that Pinterest is using box-shadow on these pins. We’ve optimized the performance of box-shadow in Chrome over the past two years, but in combination with other styles and when heavily used, it could cause a bottleneck, so it’s worth looking at.

Pinterest has reduced continuous paint mode times by 40% by moving box-shadow to a separate element that doesn’t have border-radius. The side effect is slightly fuzzy-looking corners; however, it is barely noticeable due to the color scheme and the low border-radius values.

Note: You can read more about this topic in “CSS Paint Times and Page Render Weight.”

Toggling styles to measure their effect on page-rendering weight.

Let’s disable box-shadow to see whether it makes a difference. As you can see, it’s no longer visible on any of the pins. So, let’s go back to the timeline and record a new session in which we scroll the same way as we did before (up and down, up and down, up and down). We’re getting closer to 60 FPS now, and that’s just from one change.

Public service announcement: We’re absolutely not saying don’t use box-shadow — by all means, do! Just make sure that if you have a performance problem, you measure correctly to find out what your own bottlenecks are. Always measure! Your website or application is unique, and so are its performance bottlenecks. Browser internals change almost daily, so measuring is the smartest way to stay up to date on the changes, and Chrome’s Developer Tools make this really easy to do.

Using Chrome Developer Tools to profile is the best way to track browser performance changes.

Note: Eberhard Grather recently wrote a detailed post on “Profiling Long Paint Times With DevTools’ Continuous Painting Mode,” which you should spend some quality time with.

Another thing we noticed: if you click the “Repin” button, you’ll see the animated effect and the lightbox being painted, along with a big red flash of repaint in the background. It’s not clear from the tooling whether the paint is the white cover or some other affected area. Be sure to double-check that the paint rectangles correspond to the element or elements that you think are being repainted, and not just to what it looks like. In this case, it looks like the whole screen is being repainted, but it could well be just the white cover, which might not be all that expensive. It’s nuanced; the important thing is to understand what you’re seeing and why.

Hardware Compositing (GPU Acceleration)

The last thing we’re going to look at on Pinterest is GPU acceleration. In the past, Web browsers have relied pretty heavily on the CPU to render pages. This involved two things: firstly, painting elements into a bunch of textures, called layers; and secondly, compositing all of those layers together to the final picture seen on screen.

Over the past few years, however, we’ve found that getting the GPU involved in the compositing process can lead to some significant speeding up. The premise is that, while the textures are still painted on the CPU, they can be uploaded to the GPU for compositing. Assuming that all we do on future frames is move elements around (using CSS transitions or animations) or change their opacity, we simply provide these changes to the GPU and it takes care of the rest. We essentially avoid having to give the GPU any new graphics; rather, we just ask it to move existing ones around. This is something that the GPU is exceptionally quick at doing, thus improving performance overall.

There is no guarantee that hardware compositing will be available and enabled on a given platform, but where it is available, Chrome will switch it on for an element the first time you use, say, a 3D transform on it. Many developers use the translateZ hack to do just that. The other side effect of using this hack is that the element in question will get its own layer, which may or may not be what you want. It can be very useful to effectively isolate an element so that it doesn’t affect others as and when it gets repainted. It’s worth remembering that the uploading of these textures from system memory to the video memory is not necessarily very quick. The more layers you have, the more textures need to be uploaded and the more layers need to be managed, so it’s best not to overdo it.
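
For reference, the hack itself is nothing more than a null 3D transform on the element you want isolated. It is normally written in CSS, but the JavaScript sketch below does the same thing for a handful of elements (the .pin selector is a placeholder), and, as noted, it should be applied sparingly and only where measurement justifies it:

// Promote a few expensive-to-paint elements to their own compositor layers.
// The selector is a placeholder; apply this sparingly and measure the result.
var isolated = document.querySelectorAll('.pin');

Array.prototype.forEach.call(isolated, function (el) {
  // A null 3D transform; on browsers with hardware compositing available,
  // this typically gives the element its own layer.
  el.style.webkitTransform = 'translateZ(0)';
  el.style.transform = 'translateZ(0)';
});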

Note: Tom Wiltzius has written about the layer model in Chrome, which is a relevant read if you are interested in understanding how compositing works behind the scenes. Paul has also written a post about the translateZ hack and how to make sure you’re using it in the right ways.

Another great setting in Developer Tools that can help here is “Show composited layer borders.” This feature will give you insight into those DOM elements that are being manipulated at the GPU level.

Switching on composited layer borders will indicate Chrome’s rendering layers.

If an element is taking advantage of the GPU acceleration, you’ll see an orange border around it with this on. Now as we scroll through, we don’t really see any use of composited layers on this page — not when we click “Scroll to top” or otherwise.

Chrome is getting better at automatically handling layer promotion in the background; but, as mentioned, developers sometimes use the translateZ hack to create a composited layer. Below is Pinterest’s feed with translateZ(0) applied to all pins. It’s not hitting 60 FPS, but it is getting closer to a consistent 30 FPS on desktop, which is actually not bad.

Using the translateZ(0) hack on all Pinterest pins. Note the orange borders.

Remember to test on both desktop and mobile, though; their performance characteristics vary wildly. Use the timeline in both, and watch your paint time chart in Continuous Paint mode to evaluate how fast you’re busting your budget.

Again, don’t use this hack on every element on the page — it might pass muster on desktop, but it won’t on mobile. The reason is that there is increased video memory usage and an increased layer management cost, both of which could have a negative impact on performance. Instead, use hardware compositing only to isolate elements where the paint cost is measurably high.

Note: In the WebKit nightlies, the Web Inspector now also gives you the reasons for layers being composited. To enable this, switch off the “Use WebKit Web Inspector” option and you’ll get the front end with this feature in there. Switch it on using the “Layers” button.

A Find-and-Fix Workflow

Now that we’ve concluded our Pinterest case study, what about the workflow for diagnosing and addressing your own paint problems?

Finding the Problem

  • Make sure you’re in “Incognito” mode. Extensions and apps can skew the figures that are reported when profiling performance.
  • Open the page and the Developer Tools.
  • In the timeline, record and interact with your page.
  • Check for frames that go over budget (i.e. frames that take longer than the 16.7 milliseconds needed for 60 FPS).
  • If you’re close to budget, then you’re likely way over the budget on mobile.
  • Check the cause of the jank. Long paint? CSS layout? JavaScript?

Spend some quality time with Frame mode in Chrome Developer Tools to understand your website’s runtime profile.

Fixing the Problem

  • Go to “Settings” and enable “Continuous Page Repainting.”
  • In the “Elements” panel, hide anything non-essential using the hide (H) shortcut.
  • Walk through the DOM tree, hiding elements and checking the FPS in the timeline.
  • See which element(s) are causing long paints.
  • Uncheck styles that could affect paint time, and track the FPS.
  • Continue until you’ve located the elements and styles responsible for the slow-down.

Switch on extra Developer Tools features for more insight.

What About Other Browsers?

Although at the time of writing, Chrome has the best tools to profile paint performance, we strongly recommend testing and measuring your pages in other browsers to get a feel for what your own users might experience (where feasible). Performance can vary massively between them, and a performance smell in one browser might not be present in another.

As we said earlier, don’t guess it, test it! Measure for yourself, understand the abstractions, know your browser’s internals. In time, we hope that the cross-browser tooling for this area improves so that developers can get an accurate picture of rendering performance, regardless of the browser being used.

Conclusion

Performance is important. Not all machines are created equal, and the fast machines that developers work on might not have the performance problems encountered on the devices of real users. Frame rate in particular can have a big impact on engagement and, consequently, on a project’s success. Luckily, a lot of great tools out there can help with that.

Be sure to measure paint performance on both desktop and mobile. If all goes well, your users will end up with snappier, more silky-smooth experiences, regardless of the device they’re using.


About the Authors

Addy Osmani and Paul Lewis are engineers on the Developer Relations team at Chrome, with a focus on tooling and rendering performance, respectively. When they’re not causing trouble, they have a passion for helping developers build snappy, fluid experiences on the Web.

(al)

© Addy Osmani for Smashing Magazine, 2013.

Original author: The New York Times

With more people reading the Times on smartphones, you can now experience Lens on the New York Times iPad/iPhone or Android app.

Original author: Dan Goodin


Recently discovered malware targeting Android smartphones exploits previously unknown vulnerabilities in the Google operating system and borrows highly advanced functionality more typical of malicious Windows applications, making it the world's most sophisticated Android Trojan, a security researcher said.

The infection, named Backdoor.AndroidOS.Obad.a, isn't very widespread at the moment. The malware gives an idea of the types of smartphone malware that are possible, however, according to Kaspersky Lab expert Roman Unuchek in a blog post published Thursday. Sharply contrasting with the mostly rudimentary Android malware circulating today, the highly stealthy Obad.a exploits previously unknown Android bugs, uses Bluetooth and Wi-Fi connections to spread to nearby handsets, and allows attackers to issue malicious commands using standard SMS text messages.

"To conclude this review, we would like to add that Backdoor.AndroidOS.Obad.a looks closer to Windows malware than to other Android trojans, in terms of its complexity and the number of unpublished vulnerabilities it exploits," Unuchek wrote. "This means that the complexity of Android malware programs is growing rapidly alongside their numbers."


Original author: samzenpus

chicksdaddy writes "A new malicious program that runs on Android mobile devices exploits vulnerabilities in Google's mobile operating system to extend the application's permissions on the infected device, and to block attempts to remove the malicious application, The Security Ledger reports. The malware, dubbed Backdoor.AndroidOS.Obad.a, is described as a 'multi function Trojan.' Like most profit-oriented mobile malware, Obad is primarily an SMS Trojan, which surreptitiously sends short message service (SMS) messages to premium numbers. However, it is capable of downloading additional modules and of spreading via Bluetooth connections. Writing on the Securelist blog, malware researcher Roman Unuchek called the newly discovered Trojan the 'most sophisticated' malicious program yet for Android phones. He cited the Trojan's advanced features, including complex code obfuscation techniques that complicated analysis of the code, and the use of a previously unknown vulnerability in Android that allows Obad to elevate its privileges on infected devices and block removal."



Original author: Wilson Page

  

When the mockups for the new Financial Times application hit our desks in mid-2012, we knew we had a real challenge on our hands. Many of us on the team (including me) swore that parts of the interface would not be possible in HTML5. Given the product team’s passion for the new UI, we rolled up our sleeves and gave it our best shot.

We were tasked with implementing a far more challenging product, without compromising the reliable, performant experience that made the first app so successful.


We didn’t just want to build a product that fulfilled its current requirements; we wanted to build a foundation that we could innovate on in the future. This meant building with a maintenance-first mentality, writing clean, well-commented code and, at the same time, ensuring that our code could accommodate the demands of an ever-changing feature set.

In this article, I’ll discuss some of the changes we made in the latest release and the decision-making behind them. I hope you will come away with some ideas and learn from our solutions as well as our mistakes.

Supported Devices

The first Financial Times Web app ran on iPad and iPhone in the browser, and it shipped in a native (PhoneGap-esque) application wrapper for Android and Windows 8 Metro devices. The latest Web app is currently being served to iPad devices only; but as support is built in and tested, it will be rolled out to all existing supported platforms. HTML5 gives developers the advantage of occupying almost any mobile platform. With 2013 promising the launch of several new Web application marketplaces (e.g. the Chrome Web Store and Mozilla Marketplace), we are excited by the possibilities that lie ahead for the mobile Web.

Fixed-Height Layouts

The first shock that came from the new mockups was that they were all fixed height. By “fixed height,” I mean that, unlike a conventional website, the height of the page is restricted to the height of the device’s viewport. If there is more content than there is screen space, overflow must be dealt with at a component level, as opposed to the page level. We wanted to use JavaScript only as a last resort, so the first tool that sprang to mind was flexbox. Flexbox gives developers the ability to declare flexible elements that can fill the available horizontal or vertical space, something that has been very tricky to do with CSS. Chris Coyier has a great introduction to flexbox.

Using Flexbox in Production

Flexbox has been around since 2009 and has great support on all the popular smartphones and tablets. We jumped at the chance to use flexbox when we found out how easily it could solve some of our complex layouts, and we started throwing it at every layout problem we faced. As the app began to grow, we found performance was getting worse and worse.

We spent a good few hours in Chrome Developer Tools’ timeline and found the culprit: Shock, horror! — it was our new best friend, flexbox. The timeline showed that some layouts were taking close to 100 milliseconds; reworking our layouts without flexbox reduced this to 10 milliseconds! This may not seem like a lot, but when swiping between sections, 90 milliseconds of unresponsiveness is very noticeable.

Back to the Old School

We had no other choice but to tear out flexbox wherever we could. We used 100% height, floats, negative margins, border-box sizing and padding to achieve the same layouts with much greater performance (albeit with more complex CSS). Flexbox is still used in some parts of the app. We found that its impact on performance was less expensive when used for small UI components.

Page layout time with flexbox

Page layout time without flexbox

Truncation

The content of a fixed-height layout will rarely fit its container; eventually it has to overflow. Traditionally in print, designers have used ellipses (three dots) to solve this problem; however, on the Web, this isn’t the simplest technique to implement.

Ellipsis

You might be familiar with the text-overflow: ellipsis declaration in CSS. It works great, has awesome browser support, but has one shortfall: it can’t be used for text that spans multiple lines. We needed a solution that would insert an ellipsis at the point where the paragraph overflows its container. JavaScript had to step in.

Ellipsis truncation is used throughout.

After in-depth research and exploration of several different approaches, we created our FTEllipsis library. In essence, it measures the available height of the container, then measures the height of each child element. When it finds the child element that overflows the container, it caps its height to a sensible number of lines. For WebKit-based browsers, we use the little-known -webkit-line-clamp property to truncate an element’s text by a set number of lines. For non-WebKit browsers, the library allows the developer to style the overflowing container however they wish using regular CSS.
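
As a rough illustration of the technique FTEllipsis implements (this is not the library’s actual API), the sketch below works out how many whole lines fit in a container and clamps an overflowing child with -webkit-line-clamp on WebKit browsers:

// A rough sketch of multi-line truncation (not FTEllipsis's actual API):
// work out how many whole lines fit in the container, then clamp the child.
function clampToContainer(container, child) {
  var style = getComputedStyle(child);
  var lineHeight = parseFloat(style.lineHeight);

  if (isNaN(lineHeight)) {
    // "normal" line-height: approximate it as 1.2 times the font size.
    lineHeight = parseFloat(style.fontSize) * 1.2;
  }

  var maxLines = Math.max(1, Math.floor(container.clientHeight / lineHeight));

  // WebKit-only clamping; other engines need a different fallback.
  child.style.display = '-webkit-box';
  child.style.webkitBoxOrient = 'vertical';
  child.style.webkitLineClamp = String(maxLines);
  child.style.overflow = 'hidden';
}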

Modularization

Having tackled some of the low-level visual challenges, we needed to step back and decide on the best way to manage our application’s views. We wanted to be able to reuse small parts of our views in different contexts and find a way to architect rock-solid styling that wouldn’t leak between components.

One of the best decisions we made in implementing the new application was to modularize the views. This started when we were first looking over the designs. We scribbled over printouts, breaking the page down into chunks (or modules). Our plan was to identify all of the possible layouts and modules, and define each view (or page) as a combination of modules sitting inside the slots of a single layout.

Each module needed to be named, but we found it very hard to describe a module, especially when some modules could have multiple appearances depending on screen size or context. As a result, we abandoned semantic naming and decided to name each component after a type of fruit — no more time wasted thinking up sensible, unambiguous names!

An example of a module’s markup:


<div class="apple">
  <h2 class="apple_headline">{{headline}}</h2>
  <h3 class="apple_sub-head">{{subhead}}</h3>
  <div class="apple_body">{{body}}</div>
</div>

An example of a module’s styling:


.apple {}

.apple_headline {
  font-size: 40px;
}

.apple_sub-head {
  font-size: 20px;
}

.apple_body {
  font-size: 14px;
  column-count: 2;
  color: #333;
}

Notice how each class is prefixed with the module’s name. This ensures that the styling for one component will never affect another; every module’s styling is encapsulated. Also, notice how we use just one class in our CSS selectors; this makes our component transportable. Ridding selectors of any ancestral context means that modules may be dropped anywhere in our application and will look the same. This is all imperative if we want to be able to reuse components throughout the application (and even across applications).

What If a Module Needs Interactions?

Each module (or fruit) has its own markup and style, which we wrote in such a way that it can be reused. But what if we need a module to respond to interactions or events? We need a way to bring the component to life, but still ensure that it is unbound from context so that it can be reused in different places. This is a little trickier than just writing smart markup and styling. To solve this problem, we wrote FruitMachine.

Reusable Components

FruitMachine is a lightweight library that assembles our layout’s components and enables us to declare interactions on a per-module basis. It was inspired by the simplicity of Backbone views, but with a little more structure to keep “boilerplate” code to a minimum. FruitMachine gives our team a consistent way to work with views, while at the same time remaining relatively unopinionated so that it can be used in almost any view.

The Component Mentality

Thinking about your application as a collection of standalone components changes the way you approach problems. Components need to be dumb; they can’t know anything of their context or of the consequences of any interactions that may occur within them. They can have a public API and should emit events when they are interacted with. An application-specific controller assembles each layout and is the brain behind everything. Its job is to create, control and listen to each component in the view.

For example, to show a popover when a component named “button” is clicked, we would not hardcode this logic into the button component. Instead, “button” would emit a buttonclicked event on itself every time its button is clicked; the view controller would listen for this event and then show the popover. By working like this, we can create a large collection of components that can be reused in many different contexts. A view component must not have any application-specific dependencies if it is to be used across projects.
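
Here is a bare-bones sketch of that split (not FruitMachine’s real API, just a hand-rolled emitter with placeholder names): the button component only announces what happened, and the controller decides what to do about it.

// A bare-bones sketch of the component/controller split (not FruitMachine's actual API).
function ButtonComponent(el) {
  var self = this;
  this.el = el;
  this.listeners = {};

  el.addEventListener('click', function () {
    // The component is "dumb": it only reports that it was clicked.
    self.emit('buttonclicked');
  });
}

ButtonComponent.prototype.on = function (name, fn) {
  (this.listeners[name] = this.listeners[name] || []).push(fn);
};

ButtonComponent.prototype.emit = function (name) {
  (this.listeners[name] || []).forEach(function (fn) { fn(); });
};

// The application-specific controller owns the consequences of the interaction.
// The selector and showPopover are placeholders.
var button = new ButtonComponent(document.querySelector('.apple_button'));

button.on('buttonclicked', function () {
  showPopover();
});

function showPopover() {
  console.log('Controller: show the popover');
}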

Working like this has simplified our architecture considerably. Breaking down our views into components and decoupling them from our application focuses our decision-making and moves us away from baking complex, heavily dependent modules into our application.

The Future of FruitMachine

FruitMachine was our solution to achieve fully transportable view components. It enables us to quickly define and assemble views with minimal effort. We are currently using FruitMachine only on the client, but server-side (NodeJS) usage has been considered throughout development. In the coming months, we hope to move towards producing server-side-rendered websites that progressively enhance into a rich app experience.

You can find out more about FruitMachine and check out some more examples in the public GitHub repository.

Retina Support

The Financial Times’ first Web app was released before the age of “Retina” screens. We retrofitted some high-resolution solutions, but never went the whole hog. For our designers, 100% Retina support was a must-have in the new application. We developers were sick of maintaining multiple sizes and resolutions of each tiny image within the UI, so a single vector-based solution seemed like the best approach. We ended up choosing icon fonts to replace our old PNGs, and because they are implemented just like any other custom font, they are really well supported. SVG graphics were considered, but after finding a lack of support in Android 2.3 and below, this option was ruled out. Plus, there is something nice about having all of your icons bundled up in a single file, whilst not sacrificing the individuality of each graphic (like sprites).

Our first move was to replace the Financial Times’ logo image with a single glyph in our own custom icon font. A font glyph may be any color and size, and it always looks super-sharp and is usually lighter in weight than the original image. Once we had proved it could work, we began replacing every UI image and icon with an icon font alternative. Now, the only pixel-based image in our CSS is the full-color logo on the splash screen. We used the powerful but rather archaic-looking FontForge to achieve this.

Once past the installation phase, you can open any font file in FontForge and individually change the vector shape of any character. We imported SVG vector shapes (created in Adobe Illustrator) into suitable character slots of our font and exported as WOFF and TTF font types. A combination of WOFF and TTF file formats is required to support iOS, Android and Windows devices, although we hope to rely only on WOFFs once Android gains support (plus, WOFFs are around 25% smaller in file size than TTFs).

The Financial Times’ icon font in FontForge

Images

Article images are crucial for user engagement. Our images are delivered as double-resolution JPEGs so that they look sharp on Retina screens. Our image service (running ImageMagick) outputs JPEGs at the lowest possible quality level without causing noticeable degradation (we use 35 for Retina devices and 70 for non-Retina). Scaling down retina size images in the browser enables us to reduce JPEG quality to a lower level than would otherwise be possible without compression artifacts becoming noticeable. This article explains this technique in more detail.

It’s worth noting that this technique does require the browser to work a little harder. In old browsers, the work of scaling down many large images could have a noticeable impact on performance, but we haven’t encountered any serious problems.

Native-Like Scrolling

Like almost any application, we require full-page and subcomponent scrolling in order to manage all of the content we want to show our users. On desktop, we can make use of the well-established overflow CSS property. When dealing with the mobile Web, this isn’t so straightforward. We require a single solution that provides a “momentum” scrolling experience across all of the devices we support.

overflow: scroll

The overflow: scroll declaration is becoming usable on the mobile Web. Android and iOS now support it, but only since Android 3.0 and iOS 5. iOS 5 came with the exciting new -webkit-overflow-scrolling: touch property, which allows for native momentum-like scrolling in the browser. Both of these options have their limitations.

Standard overflow: scroll and overflow: auto don’t display scroll bars as users might expect, and they don’t have the momentum touch-scrolling feel that users have become accustomed to from their native apps. The -webkit-overflow-scrolling: touch declaration does add momentum scrolling and scroll bars, but it doesn’t allow developers to style the scroll bars in any way, and has limited support (iOS 5+ and Chrome on Android).

A Consistent Experience

Fragmented support and an inconsistent feel forced us to turn to JavaScript. Our first implementation used the TouchScroll library. This solution met our needs, but as our list of supported devices grew and as more complex scrolling interactions were required, working with it became trickier. TouchScroll lacks IE 10 support, and its API is difficult to work with. We also tried Scrollability and Zynga Scroller, neither of which had the features, performance or cross-browser capability we were looking for. Out of this problem, FTScroller was developed: a high-performance, momentum-scrolling library with support for iOS, Android, PlayBook and IE 10.

FTScroller

FTScroller’s scrolling implementation is similar to TouchScroll’s, with a flexible API much like Zynga Scroller. We added some enhancements, such as CSS bezier curves for bouncing, requestAnimationFrame for smoother frame rates, and support for IE 10. The advantage of writing our own solution is that we could develop a product that exactly meets our requirements. When you know the code base inside out, fixing bugs and adding features is a lot simpler.

FTScroller is dead simple to use. Just pass in the element that will wrap the overflowing content, and FTScroller will implement horizontal or vertical scrolling as and when needed. Many other options may be declared in an object as the second argument, for more custom requirements. We use FTScroller throughout the Financial Times’ Web app for a consistent cross-platform scrolling experience.

A simple example:


var container = document.getElementById('scrollcontainer');
var scroller = new FTScroller(container);

The Gallery

The part of our application that holds and animates the page views is known as the “gallery.” It consists of three divisions: left, center and right. The page that is currently in view is located in the center pane. The previous page is positioned off screen in the left-hand pane, and the next page is positioned off screen in the right-hand pane. When the user swipes to the next page, we use CSS transitions to animate the three panes to the left, revealing the hidden right pane. When the transition has finished, the right pane becomes the center pane, and the far-left pane skips over to become the right pane. By using only three page containers, we keep the DOM light, while still creating the illusion of infinite pages.

Infinite scrolling made possible with a three-pane gallery
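
A stripped-down sketch of that pane-recycling idea (not the app’s actual code; the pane IDs and timing are placeholders): after the slide transition finishes, the three panes swap roles and snap back into place without animating, so only three page containers ever exist.

// A stripped-down sketch of three-pane recycling (not the FT app's actual code).
// The pane IDs and timing are placeholders.
var panes = [
  document.getElementById('pane-left'),
  document.getElementById('pane-center'),
  document.getElementById('pane-right')
];

// Each pane sits one screen-width apart: -100%, 0 and +100%.
function position(pane, index, animate) {
  pane.style.transition = animate ? 'transform 0.3s ease-out' : 'none';
  pane.style.transform = 'translateX(' + (index * 100) + '%)';
}

panes.forEach(function (pane, i) { position(pane, i - 1, false); });

// Advance to the next page: slide everything left, then rotate the panes' roles.
function showNextPage(renderPage) {
  panes.forEach(function (pane, i) { position(pane, i - 2, true); });

  panes[1].addEventListener('transitionend', function onEnd() {
    panes[1].removeEventListener('transitionend', onEnd);

    // The old right pane is now the centre; the old left pane is recycled
    // as the new right pane and repositioned without animating.
    var recycled = panes.shift();
    panes.push(recycled);

    renderPage(recycled); // fill the recycled pane with the next page's content
    panes.forEach(function (pane, i) { position(pane, i - 1, false); });
  });
}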

Making It All Work Offline

Not many Web apps currently offer an offline experience, and there’s a good reason for that: implementing it is a bloody pain! The application cache (AppCache) at first glance appears to be the answer to all offline problems, but dig a little deeper and stuff gets nasty. Talks by Andrew Betts and Jake Archibald explain really well the problems you will encounter. Unfortunately, AppCache is currently the only way to achieve offline support, so we have to work around its many deficiencies.

Our approach to offline is to store as little in the AppCache as possible. We use it for fonts, the favicon and one or two UI images — things that we know will rarely or never need updating. Our JavaScript, CSS and templates live in LocalStorage. This approach gives us complete control over serving and updating the most crucial parts of our application. When the application starts, the bare minimum required to get the app up and running is sent down the wire, embedded in a single HTML page; we call this the preload.

We show a splash screen, and behind the scenes we make a request for the application’s full resources. This request returns a big JSON object containing our JavaScript, CSS and Mustache templates. We eval the JavaScript and inject the CSS into the DOM, and then the application launches. This “bootstrap” JSON is then stored in LocalStorage, ready to be used when the app is next started up.

On subsequent startups, we always use the JSON from LocalStorage and then check for resource updates in the background. If an update is found, we download the latest JSON object and replace the existing one in LocalStorage. Then, the next time the app starts, it launches with the new assets. If the app is launched offline, the startup process is the same, except that we cannot make the request for resource updates.
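
A heavily simplified sketch of that startup flow follows; the endpoint, storage key and JSON shape are placeholders, and error handling and cache versioning are left out:

// A heavily simplified startup flow: boot from localStorage when we can,
// then check for updated resources in the background.
// The endpoint, storage key and JSON shape are placeholders.
var BOOTSTRAP_KEY = 'app-bootstrap';

function boot(bundle) {
  var style = document.createElement('style');
  style.textContent = bundle.css;    // inject the CSS
  document.head.appendChild(style);

  eval(bundle.js);                   // evaluate the application JavaScript

  // bundle.templates (Mustache strings) would be registered with the view layer here.
}

function fetchBundle(callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/bootstrap.json');
  xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
  xhr.send();
}

var cached = localStorage.getItem(BOOTSTRAP_KEY);

if (cached) {
  // Subsequent startups: always boot from the stored bundle...
  boot(JSON.parse(cached));
  // ...and quietly fetch updates to be used on the next launch.
  fetchBundle(function (bundle) {
    localStorage.setItem(BOOTSTRAP_KEY, JSON.stringify(bundle));
  });
} else {
  // First run: download the bundle, store it, then boot.
  fetchBundle(function (bundle) {
    localStorage.setItem(BOOTSTRAP_KEY, JSON.stringify(bundle));
    boot(bundle);
  });
}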

Images

Managing offline images is currently not as easy as it should be. Our image requests are run through a custom image loader and cached in the local database (IndexedDB or WebSQL) so that the images can be loaded when a network connection is not present. We never load images in the conventional way, otherwise they would break when users are offline.

Our image-loading process:

  1. The loader scans the page for image placeholders declared by a particular class.
  2. It takes the src attribute of each image placeholder found and requests the source from our JavaScript image-loader library.
  3. The local database is checked for each image. Failing that, a single HTTP request is made listing all missing images.
  4. A JSON array of Base64-encoded images is returned from the HTTP response and stored separately in the local database.
  5. A callback is fired for each image request, passing the Base64 string as an argument.
  6. An <img> element is created, and its src attribute is set to the Base64 data-URI string.
  7. The image is faded in.

I should also mention that we compress our Base64-encoded image strings in order to fit as many images in the database as possible. My colleague Andrew Betts goes into detail on how this can be achieved.
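
A condensed sketch of that flow follows. The imageStore get/put helpers stand in for the IndexedDB/WebSQL wrapper, the endpoint and class names are placeholders, and the real loader also has to deal with batching, failures and storage quotas:

// A condensed sketch of the offline image loader. `imageStore.get`/`imageStore.put`
// stand in for an IndexedDB/WebSQL wrapper; the class name and endpoint are placeholders.
function loadImages(imageStore) {
  var placeholders = document.querySelectorAll('.offline-image');   // 1. find placeholders
  var missing = [];
  var pending = placeholders.length;

  Array.prototype.forEach.call(placeholders, function (placeholder) {
    var src = placeholder.getAttribute('data-src');                 // 2. requested source
    imageStore.get(src, function (base64) {                         // 3. check local database
      if (base64) {
        show(placeholder, base64);
      } else {
        missing.push({ src: src, el: placeholder });
      }
      if (--pending === 0) { fetchMissing(); }
    });
  });

  function fetchMissing() {
    if (!missing.length) { return; }

    // 3-4. One HTTP request listing every missing image; the response is a
    // JSON array of Base64-encoded images in the same order.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/images');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      JSON.parse(xhr.responseText).forEach(function (base64, i) {
        imageStore.put(missing[i].src, base64);                     // store for offline use
        show(missing[i].el, base64);                                // 5. per-image callback
      });
    };
    xhr.send(JSON.stringify(missing.map(function (m) { return m.src; })));
  }
}

function show(placeholder, base64) {
  var img = document.createElement('img');                          // 6. build the element
  img.src = 'data:image/jpeg;base64,' + base64;
  img.style.opacity = 0;
  img.style.transition = 'opacity 0.3s';
  placeholder.appendChild(img);
  requestAnimationFrame(function () { img.style.opacity = 1; });    // 7. fade it in
}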

In some cases, we use this cool trick to handle images that fail to load:


<img src="image.jpg" onerror="this.style.display='none';" />

Ever-Evolving Applications

In order to stay competitive, a digital product needs to evolve, and as developers, we need to be prepared for this. When the request for a redesign landed at the Financial Times, we already had a fast, popular, feature-rich application, but it wasn’t built for change. At the time, we were able to implement small changes to features, but implementing anything big became a slow process and often introduced a lot of unrelated regressions.

Our application was drastically reworked to make the new requirements possible, and this took a lot of time. Having made this investment, we hope the new application not only meets (and even exceeds) the standard of the first product, but gives us a platform on which we can develop faster and more flexibly in the future.

(al)

© Wilson Page for Smashing Magazine, 2013.

Original author: GoogleDevelopers


The Modern Workflow for Developing the Mobile Web - Google I/O 2013

Matt Gaunt: Building for today's mobile web, getting 60fps across all target devices, while still delivering a fantastic user experience is a huge challenge. ...
From: GoogleDevelopers · Views: 3,483 · 54 ratings · Time: 30:33

Original author: Florence Ion

We spent the week at I/O sitting in sessions, walking around the show floor, and congregating with developers. After the keynote, things got quieter on the news front but there was still plenty to learn about. This conference is about community, bringing together developers of all types, and connecting people with similar interests and backgrounds. It's also about adorable little Androids, which absolutely overwhelmed downtown San Francisco's convention center, the Moscone Center.


The Google Store: A Google Store employee models the Android Superhero costume, available for a mere $32.80. There was no word on compatibility with the YouTube Socks. (Photo: Sean Gallagher)


Original author: GoogleDevelopers


Google I/O 2013 - Android Design for UI Developers

Nick Butcher, Roman Nurik: Design on Android is no longer a complex mystery of disjointed patterns; the Android design guidelines have paved the way for a des...
From: GoogleDevelopers · Views: 5,414 · 229 ratings · Time: 40:14
