
Tablet computer

Original author: 
Jon Brodkin


Ubuntu 13.04.

The stable release of Ubuntu 13.04 became available for download today, with Canonical promising performance and graphical improvements to help prepare the operating system for convergence across PCs, phones, and tablets.

"Performance on lightweight systems was a core focus for this cycle, as a prelude to Ubuntu’s release on a range of mobile form factors," Canonical said in an announcement today. "As a result 13.04 delivers significantly faster response times in casual use, and a reduced memory footprint that benefits all users."

Named "Raring Ringtail" (the prelude to Saucy Salamander), Ubuntu 13.04 is the midway point in the OS's two-year development cycle. Ubuntu 12.04, the more stable Long Term Support edition that is supported for five years, was released one year ago. Security updates are promised for only 9 months for interim releases like 13.04; support windows for interim releases were recently cut from 18 months to 9 months to reduce the number of versions Ubuntu developers must support and let them focus on bigger and better things.

Original author: 
Andrew Cunningham

Aurich Lawson / Thinkstock

Welcome back to our three-part series on touchscreen technology. Last time, Florence Ion walked you through the technology's past, from the invention of the first touchscreens in the 1960s all the way up through the mid-2000s. During this period, different versions of the technology appeared in everything from PCs to early cell phones to personal digital assistants like Apple's Newton and the Palm Pilot. But all of these gadgets proved to be little more than a tease, a prelude to the main event. In this second part in our series, we'll be talking about touchscreens in the here-and-now.

When you think about touchscreens today, you probably think about smartphones and tablets, and for good reason. The 2007 introduction of the iPhone kicked off a transformation that turned a couple of niche products—smartphones and tablets—into billion-dollar industries. The current fierce competition from software like Android and Windows Phone (as well as hardware makers like Samsung and a host of others) means that new products are being introduced at a frantic pace.

The screens themselves are just one of the driving forces that make these devices possible (and successful). Ever-smaller, ever-faster chips allow a phone to do things only a heavy-duty desktop could do just a decade or so ago, something we've discussed in detail elsewhere. The software that powers these devices is more important, though. Where older tablets and PDAs required a stylus or interaction with a cramped physical keyboard or trackball, mobile software has adapted to be better suited to humans' native pointing device—the larger, clumsier, but much more convenient finger.

Original author: 
Scott Gilbertson

Hybrids. Image: Screenshot/Webmonkey.

The advent of hybrid laptops that double as tablets or offer some sort of touch input has greatly complicated the lives of web developers.

A big part of developing for today’s myriad screens is knowing when to adjust the interface, based not just on screen size, but other details like input device. Fingers are far less precise than a mouse, which means bigger buttons, form fields and other input areas.

But with hybrid devices like touchscreen Windows 8 laptops or dockable Android tablets with keyboards, how do you know whether the user is browsing with a mouse or a finger?

Over on the Mozilla Hacks blog, Patrick Lauke tackles that question in an article on detecting touch-capable devices. Lauke covers the relatively simple case of touch-only devices, like those running iOS, before diving into the far more complex problem of hybrid devices.
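The simple case Lauke describes is typically handled with feature detection. Here's a minimal sketch (not Lauke's exact code; the `win` parameter is passed in only so the logic can be exercised outside a browser):

```javascript
// Minimal feature-detection sketch for touch capability.
// Caveat (Lauke's point): this tells you the browser *supports* touch
// events, not that the user is actually touching the screen right now.
function deviceSupportsTouch(win) {
  return ('ontouchstart' in win) ||
    !!(win.navigator && win.navigator.maxTouchPoints > 0);
}
```

A hybrid laptop passes this check even when the user is driving everything with the mouse, which is exactly why a positive result can't be treated as an either/or signal.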

Lauke’s answer? If developing for the web hasn’t already taught you this lesson, perhaps hybrid devices will — learn to live with uncertainty and accept that you can’t control everything.

What’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities.

While learning to live with uncertainty and providing interfaces that work with any input sounds nice in theory, developers are bound to want something a bit more concrete. There’s some hope on the horizon: Microsoft has proposed the Pointer Events spec (and created a build of WebKit that supports it), and the CSS Media Queries Level 4 spec will offer a pointer query to see what sort of input device is being used (mouse, finger, stylus, etc.).

Unfortunately, neither Pointer Events nor Media Queries Level 4 is supported in today’s browsers. Eventually there will probably be a reliable way to detect which input device is being used, but for the time being you’re going to have to live with some level of uncertainty. Be sure to read through Lauke’s post for more details and some sample code.
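For a sense of what querying the Level 4 pointer feature could look like once browsers ship it, here's a hedged sketch built on window.matchMedia (again, the `win` parameter exists only to make the function testable outside a browser, and this assumes a browser that implements the query):

```javascript
// Sketch of the proposed Media Queries Level 4 "pointer" query.
// "coarse" means a finger-like primary pointer; "fine" means mouse or stylus.
function primaryPointerIsCoarse(win) {
  return win.matchMedia('(pointer: coarse)').matches;
}
```

Note this only describes the *primary* pointer: a hybrid device with both a fine trackpad and a coarse touchscreen still requires the input-agnostic approach Lauke recommends.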

Original author: 
Florence Ion

Aurich Lawson / Thinkstock

It's hard to believe that just a few decades ago, touchscreen technology could only be found in science fiction books and film. These days, it's almost unfathomable how we once got through our daily tasks without a trusty tablet or smartphone nearby, but it doesn't stop there. Touchscreens really are everywhere. Homes, cars, restaurants, stores, planes, wherever—they fill our lives in spaces public and private.

It took generations and several major technological advancements for touchscreens to achieve this kind of presence. Although the underlying technology behind touchscreens can be traced back to the 1940s, there's plenty of evidence that suggests touchscreens weren't feasible until at least 1965. Popular science fiction television shows like Star Trek didn't even refer to the technology until Star Trek: The Next Generation debuted in 1987, almost two decades after touchscreen technology was even deemed possible. But their inclusion in the series paralleled the advancements in the technology world, and by the late 1980s, touchscreens finally appeared to be realistic enough that consumers could actually bring the technology into their own homes.

This article is the first of a three-part series on touchscreen technology's journey from fiction to fact. The first three decades of touch are important to reflect upon in order to really appreciate the multitouch technology we're so used to having today. Today, we'll look at when these technologies first arose and who introduced them, plus we'll discuss several other pioneers who played a big role in advancing touch. Future entries in this series will study how the changes in touch displays led to essential devices for our lives today and where the technology might take us in the future. But first, let's put finger to screen and travel to the 1960s.



Bezos' "remote display" patent envisions tablets and e-readers that are just screens—power and processing are provided wirelessly by a central system.

US Patent & Trademark Office

It seems like everyone is trying to jump on the cloud computing bandwagon, but Amazon Chairman and CEO Jeff Bezos wants to take it to a whole new level. GeekWire reports that he and Gregory Hart have filed a patent for "remote displays" that would get data and power from a centrally located "primary station." The tablets or e-readers would simply be screens, and the need for a large internal battery or significant local processing power would theoretically be obviated by the primary station.

The patent sees processors and large internal batteries as the next major roadblocks in the pursuit of thinner and lighter devices. "The ability to continue to reduce the form factor of many of today's devices is somewhat limited, however, as the devices typically include components such as processors and batteries that limit the minimum size and weight of the device. While the size of a battery is continuously getting smaller, the operational or functional time of these smaller batteries is often insufficient for many users."

The full patent is an interesting read, since it presents other potential use cases for these "remote displays" that wouldn't necessarily need to wait on this theoretical fully wireless future-tablet to come to pass. For example: a camera or sensor could detect when a hand is passed over an e-reader display and respond by turning the page. A touch-sensitive casing could detect when a child is handling a display by measuring things like the length and width of their fingers and then disable purchasing of new content or the ability to access "inappropriate" content.


Nerval's Lobster writes "At a SXSW panel titled 'Android's Principles for Designing the Future,' Helena Roeber (who headed up Android's UX research from 2007 through 2012) and Rachel Garb (who leads interaction design for Android apps at Google) discussed the complex philosophy behind Android's design. Roeber went back to the very beginning, recounting Google's Android Baseline Study, in which the team made in-home visits to study how people use technology. 'We saw the profound effect that technological design has on people's lives,' she said. 'Technology had become so pervasive that people had started to schedule and enforce deliberate offline moments to spend time with their family and friends.' From that study, the team learned that users were often overwhelmed by their options and 'limitless flexibility,' leading them to consider how to design a mobile operating system that wouldn't beat those users over the head (at least in the proverbial sense) on a minute-by-minute basis. Instead, they focused on an interface capable of serving features to users only when needed. That meant creating an interface that only interrupts users when needed; that does the 'heavy lifting' of the user's tasks and scheduling; that emphasizes 'real objects' over buttons and menus; and that offers lots of chances for customization. All those elements—and many more—eventually ended up in Android's trio of design principles: 'Enchant Me, Simplify My Life, and Make Me Amazing.'"


Read more of this story at Slashdot.


At last year's RSA security conference, we ran into the Pwn Plug. The company has just come out with a new take on the same basic idea of pen-testing devices based on commodity hardware. Reader puddingebola writes with an excerpt from Wired: "The folks at security tools company Pwnie Express have built a tablet that can bash the heck out of corporate networks. Called the Pwn Pad, it's a full-fledged hacking toolkit built atop Google's Android operating system. Some important hacking tools have already been ported to Android, but Pwnie Express says that they've added some new ones. Most importantly, this is the first time that they've been able to get popular wireless hacking tools like Aircrack-ng and Kismet to work on an Android device." Pwnie Express will be back at RSA and so will Slashdot, so there's a good chance we'll get a close-up look at the new device, which runs about $800.



You have five minutes while waiting for a friend to meet you for lunch, so you find yourself shopping for a new pair of shoes. When your friend arrives, you put the phone away, but leave the web page open to help you remember what you found when you get home.

While you’re at work, you read a restaurant review for a new place you think sounds tasty. Come dinnertime, you grab your phone to pull up the address and location.

One night on your tablet, you’re browsing articles for a report you’re writing at work. Back at your desk the next day, you struggle in vain to remember what you searched for to find those articles. Why can’t you find them again?

Sound familiar? If you’re like most people, it probably does. Research from Google (PDF) shows that 90 percent of people start a task using one device, then pick it up later on another device—most commonly, people start a task on a smartphone, and then complete it on the desktop. As you might expect, people regularly do this kind of device switching for the most common activities, like browsing the internet (81 percent) or social networking (72 percent). Certain categories like retail (67 percent), financial services (46 percent), and travel (43 percent) also seem to support this kind of sequential use of different devices.

Dual-screen or multi-screen use of devices gets a lot of attention, but we tend to focus on simultaneous usage—say, using tablets or smartphones while watching TV. Publishers, advertisers, and social networks are all actively trying to figure out how to deliver a good experience to users as they shift their attention between two screens at the same time. Sequential usage is every bit as common, but we rarely acknowledge this behavior or try to optimize for this experience.

When people start a task on one device and then complete it on another, they don’t want different content or less content, tailored for the device. They want the same content, presented so they can find it, navigate it, and read it. They imagine that their devices are different-sized windows on the same content, not entirely different containers.

What should we do to provide a good experience for users who want to complete the same task across more than one device?

Content parity

Let’s make device-switching the final nail in the coffin for the argument that mobile websites should offer a subset of the content on the “real” website. Everyone’s had the frustrating experience of trying to find content they’ve seen on the desktop that isn’t accessible from a phone. But the reverse is also a problem: users who start a task from a smartphone during a bit of free time shouldn’t be cut off from options they’d find back at their desktop.

Consistent navigation labels

When picking up a task on a second device, about half of users say they navigate directly to the website to find the desired information again. Users who are trying to locate the same information across a mobile site (or app) and a desktop site can’t rely on the same visual and spatial cues to help them find what they’re looking for. As much as possible, make it easy for them by keeping navigation categories and hierarchy exactly the same. There aren’t that many cases where we truly need to provide different navigation options on mobile. Most desktop navigation systems have been extensively tested—we know those categories and labels work, so keep them consistent.

Consistent search

About 60 percent of users say they’d use search to continue a task on another device. Businesses wondering whether “mobile SEO” is necessary should keep in mind that user tasks and goals don’t necessarily change based on the device—in fact, it’s often the identical user searching for the exact same information that very same day. It’s frustrating to get totally different results from different devices when you know what you’re looking for.

Handy tools

Users have taught themselves tricks to make their transition between devices go more smoothly—about half of users report that they send themselves a link. Sites that don’t offer consistent URLs are guaranteed to frustrate users, sending them off on a quest to figure out where that link lives. Responsive design would solve this problem, but so would tools that explicitly allow users to save their progress when logged in, or email a link to the desktop or mobile version of a page.
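Of those tools, consistent URLs are the simplest to get right. As a purely hypothetical sketch (the "m." subdomain convention below is illustrative, not something the article prescribes), mapping a mobile URL back to its canonical equivalent can be this small:

```javascript
// Hypothetical sketch: strip an "m." mobile subdomain so a link a user
// mails themselves from a phone opens the same content on the desktop.
function canonicalUrl(url) {
  return url.replace(/^(https?:\/\/)m\./, '$1');
}
```

Responsive design sidesteps the mapping entirely, since there is only one URL to begin with.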

Improved analytics

Mobile analytics is still in the dark ages. Tracking users between devices is challenging—or impossible—which means businesses don’t have a clear picture of how this kind of multi-device usage is affecting their sales. While true multi-channel analytics may be a ways off, organizations can’t afford to ignore this behavior. Don’t wait for more data to “prove” that customers are moving between devices to complete a task. Customers are already doing it.

It’s time to stop imagining that smartphones, tablets, and desktops are containers that each hold their own content, optimized for a particular browsing or reading experience. Users don’t think of it that way. Instead, users imagine that each device is its own window onto the web.


Ubuntu phone

As software launches go, yesterday's announcement of Ubuntu for phones was quite the success for parent company Canonical. Having already promised to deliver their Linux operating system to mobile platforms, Ubuntu's makers weren't really breaking any new ground, yet their small-scale event stirred imaginations and conversations among mobile phone users. Perhaps it's a sign of our growing discontent with the iOS-Android duopoly that has gripped the market, or maybe it's a symptom of Ubuntu's own popularity as the leading Linux OS on the desktop, but the Ubuntu phone has quickly become a lightning rod for refreshed discourse on the future of mobile software.

It's a shame, then, that it appears to be tracking a terminal trajectory into...

