Original author: 
Lars Kappert

  

We are talking and reading a lot about responsive Web design (RWD) these days, but very little attention is given to Web applications. Admittedly, RWD still has to be ironed out. But many of us believe it to be a strong concept, and it is here to stay. So, why don’t we extend this topic to HTML5-powered applications? Because responsive Web applications (RWAs) are both a huge opportunity and a big challenge, I wanted to dive in.

Building an RWA is more feasible than you might think. In this article, we will explore ideas and solutions. In the first part, we will set up some important concepts. We will build on these in the second part to actually develop an RWA, and then explore how scalable and portable this approach is.

Part 1: Becoming Responsible

Some Lessons Learned

It’s not easy to admit, but recently it has become more and more apparent that we don’t know many things about users of our websites. Varying screen sizes, device features and input mechanisms are pretty much RWD’s reasons for existence.

From the lessons we’ve learned so far, we mustn’t assume too much. For instance, a small screen is not necessarily a touch device. A mobile device could be over 1280 pixels wide. And a desktop could have a slow connection. We just don’t know. And that’s fine. This means we can focus on these things separately without making assumptions: that’s what responsiveness is all about.

Progressive Enhancement

The “JavaScript-enabled” debate is so ’90s. We need to optimize for accessibility and indexability (i.e. SEO) anyway. Claiming that JavaScript is required for Web apps and, thus, that there is no real need to pre-render HTML is fair (because SEO is usually less important, or not important at all, for apps). But because we are going responsive, we will inherently pay a lot of attention to mobile and, thus, to performance as well. This is why we are betting heavily on progressive enhancement.

Responsive Web Design

RWD has mostly to do with not knowing the screen’s width. We have multiple tools to work with, such as media queries, relative units and responsive images. No matter how wonderful RWD is conceptually, some technical issues still need to be solved.

Not many big websites have gone truly responsive since The Boston Globe. (Image credits: Antoine Lefeuvre)

Client-Side Solutions

In the end, RWD is mostly about client-side solutions. Assuming that the server basically sends the same initial document and resources (images, CSS and JavaScript) to every device, any responsive measures will be taken on the client, such as:

  • applying specific styles through media queries;
  • using (i.e. polyfilling) <picture> or the srcset attribute to get responsive images;
  • loading additional content.

Some of the issues surrounding RWD today are the following:

  • Responsive images haven’t been standardized.
  • Devices still load the CSS behind media queries that they never use.
  • We lack (browser-supported) responsive layout systems (think flexbox, grid, regions, template).
  • We lack element queries.

Server-Side Solutions: Responsive Content

Imagine that these challenges (such as images not being responsive and CSS loading unnecessarily) were solved on all devices and in all browsers, and that we didn’t have to resort to hacks or polyfills in the client. This would transfer some of the load from the client to the server (for instance, the CMS would have more control over responsive images).

But we would still face the issue of responsive content. Although many believe that the constraints of mobile help us to focus, to write better content and to build better designs, sometimes it’s simply not enough. This is where server-side solutions such as RESS and HTTP Client Hints come in. Basically, by knowing the device’s constraints and features up front, we can serve a different and optimized template to it.

Assuming we want to COPE, DRY and KISS and stuff, I think it comes down to where you want to draw the line here: the more important performance and content tailored to each device are, the more necessary server-side assistance becomes. But that also means betting on user-agent detection and on content negotiation. I’d say that this is a big threshold, but your mileage may vary. In any case, I can see content-focused websites getting there sooner than Web apps.

Having said that, I am focusing on RWAs in this article without resorting to server-side solutions.

Responsive Behavior

RWD is clearly about layout and design, but we will also have to focus on responsive behavior. It is what makes applications different from websites. Fluid grids and responsive images are great, but once we start talking about Web applications, we also have to be responsive in loading modules according to screen size or device capability (i.e. pretty much media queries for JavaScript).

For instance, an application might require GPS to be usable. Or it might contain a large interactive table that just doesn’t cut it on a small screen. And we simply can’t set display: none on all of these things, nor can we build everything twice.

We clearly need more.

Part 2: Building RWAs

To quickly recap, our fundamental concepts are:

  • progressive enhancement,
  • responsive design,
  • responsive behavior.

Fully armed, we will now look into a way to build responsive, context-aware applications. We’ll do this by declaratively specifying modules, conditions for loading modules, and extended modules or variants, based on feature detection and media queries. Then, we’ll dig deeper into the mechanics of dependency injection to see how all of this can be implemented.

Declarative Module Injection

We’ll start off by applying the concepts of progressive enhancement and mobile first, and create a common set of HTML, CSS and JavaScript for all devices. Later, we’ll progressively enhance the application based on content, screen size, device features, etc. The foundation is always plain HTML. Consider this fragment:


<div data-module="myModule">
    <p>Pre-rendered content</p>
</div>

Let’s assume we have some logic to query the data-module attribute in our document, to load up the referenced application module (myModule) and then to attach it to that element. Basically, we would be adding behavior that targets a particular fragment in the document.

This is our first step in making a Web application responsive: progressive module injection. Also, note that we could easily attach multiple modules to a single page in this way.
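A minimal sketch of that logic, assuming an AMD-style loader is present and that each module exposes an attachTo() method (the function and method names here are illustrative, not an established API):


function injectModules(doc) {
    var nodes = doc.querySelectorAll('[data-module]');
    Array.prototype.forEach.call(nodes, function(node) {
        var moduleName = node.getAttribute('data-module');
        // let the loader fetch the module, then attach it to its element
        require([moduleName], function(module) {
            module.attachTo(node); // assumed convention, not part of any standard
        });
    });
}

injectModules(document);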

Conditional Module Injection

Sometimes we want to load a module only if a certain condition is met — for instance, when the device has a particular feature, such as touch or GPS:


<div data-module="find/my/dog" data-condition="gps">
    <p>Pre-rendered fallback content if GPS is unavailable.</p>
</div>

This will load the find/my/dog module only if the geolocation API is available.

Note: For the smallest footprint possible, we’ll simply use our own feature detection for now. (Really, we’re just checking for 'geolocation' in navigator.) Later, we might need more robust detection and so delegate this task to a tool such as Modernizr or Has.js (and possibly PhoneGap in hybrid mode).
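A minimal sketch of such a map of feature tests, with the test names mirroring the data-condition values used in this article (the map itself is an assumption, not a standard API):


var tests = {
    gps: function() { return 'geolocation' in navigator; },
    touch: function() { return 'ontouchstart' in window; }
};

// given a node found by the injection logic sketched earlier:
var condition = node.getAttribute('data-condition');
if (!condition || (tests[condition] && tests[condition]())) {
    // proceed with loading and attaching the module
}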

Extended Module Injection

What if we want to load variants of a module based on media queries? Take this syntax:


<div data-module="myModule" data-variant="large">
    <p>Pre-rendered content</p>
</div>

This will load myModule on small screens and myModule/large on large screens.

For brevity, this single attribute contains both the condition and the location of the variant (by convention). Programmatically, you could go mobile first and have the latter extend from the former (or keep them as separate modules, or even work the other way around). This can be decided case by case.
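As an illustration of that convention, the final module ID could be resolved along these lines (a sketch; the size argument is the current breakpoint name, obtained with the media-query technique shown in the next section):


function resolveModuleId(node, size) {
    var base = node.getAttribute('data-module'),
        variant = node.getAttribute('data-variant');
    // by convention, the variant lives at <module>/<variant>, e.g. myModule/large
    return (variant && variant === size) ? base + '/' + variant : base;
}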

Media Queries

Of course, we couldn’t call this responsive if it wasn’t actually driven by media queries. Consider this CSS:


@media all and (min-width: 45em) {
	body:after {
		content: 'large';
		display: none;
	}
}

Then, from JavaScript this value can be read:


var size = window.getComputedStyle(document.body, ':after').getPropertyValue('content').replace(/["']/g, ''); // strip quotes, which some browsers include in the returned value

This is how we can decide to load the myModule/large module from the last example if size === "large", and to load myModule otherwise. Being able to conditionally not load a module at all is useful, too:


<div data-module="myModule" data-condition="!small">
    <p>Pre-rendered content</p>
</div>
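Tying these pieces together, here is a sketch of evaluating such a condition string, reusing the map of feature tests from earlier and treating a leading ! as negation (again, purely illustrative):


function conditionMet(condition, size) {
    if (!condition) {
        return true; // no condition declared, so always load
    }
    var negate = condition.charAt(0) === '!',
        name = negate ? condition.slice(1) : condition,
        // a condition is either a feature test ("gps") or a breakpoint name ("small")
        result = tests[name] ? tests[name]() : size === name;
    return negate ? !result : result;
}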

There might be cases for media queries inside module declarations:


<div data-module="myModule" data-matchMedia="min-width: 800px">
    <p>Pre-rendered content</p>
</div>

Here we can use the window.matchMedia() API (a polyfill is available). I normally wouldn’t recommend doing this because it’s not very maintainable. Following breakpoints as set in CSS seems logical (because page layout probably dictates which modules to show or hide anyway). But obviously it depends on the situation. Targeted element queries may also prove useful:


<div data-module="myModule" data-matchMediaElement="(min-width: 600px)"></div>

Please note that the names of the attributes used here represent only an example, a basic implementation. They’re supposed to clarify the idea. In a real-world scenario, it might be wise to, for example, namespace the attributes, to allow for multiple modules and/or conditions, and so on.
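For completeness, here is a sketch of honoring the data-matchMedia declaration, assuming window.matchMedia() or its polyfill is available:


var query = node.getAttribute('data-matchMedia'); // e.g. "min-width: 800px"
if (!query || window.matchMedia('(' + query + ')').matches) {
    // inject the module
}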

Device Orientation

Take special care with device orientation. We don’t want to load a different module when the device is rotated. So, the module itself should be responsive, and the page’s layout might need to accommodate this.

Connecting The Dots

The concept of responsive behavior allows for a great deal of flexibility in how applications are designed and built. We will now look into where those “modules” come in, how they relate to application structure, and how this module injection might actually work.

Applications and Modules

We can think of a client-side application as a group of application modules that are built with low-level modules. As an example, we might have User and Message models and a MessageDetail view to compose an Inbox application module, which is part of an entire email client application. The details of implementation, such as the module format to be used (for example, AMD, CommonJS or the “revealing module” pattern), are not important here. Also, defining things this way doesn’t mean we can’t have a bunch of mini-apps on a single page. On the other hand, I have found this approach to scale well to applications of any size.

A Common Scenario

An approach I see a lot is to put something like <div id="container"> in the HTML, and then load a bunch of JavaScript that uses that element as a hook to append layouts or views. For a single application on a single page, this works fine, but in my experience it doesn’t scale well:

  • Application modules are not very reusable because they rely on a particular element to be present.
  • When multiple applications or application modules are to be instantiated on a single page, they all need their own particular element, further increasing complexity.

To solve these issues, instead of letting application modules control themselves, what about making them more reusable by providing the element they should attach to? Additionally, we don’t need to know which modules must be loaded up front; we will do that dynamically. Let’s see how things come together using powerful patterns such as Dependency Injection (DI) and Inversion of Control (IOC).

Dependency Injection

You might have wondered how myModule actually gets loaded and instantiated.

Loading the dependency is pretty easy. For instance, take the string from the data-module attribute (myModule), and have a module loader fetch the myModule.js script.

Let’s assume we are using AMD or CommonJS (either of which I highly recommend) and that the module exports something (say, its public API). Let’s also assume that this is some kind of constructor that can be instantiated. We don’t know how to instantiate it because we don’t know exactly what it is up front. Should we instantiate it using new? What arguments should be passed? Is it a native JavaScript constructor function or a Backbone view or something completely different? Can we make sure the module attaches itself to the DOM element that we provide it with?

We have a couple of possible approaches here. A simple one is to always expect the same exported value — such as a Backbone view. It’s simple but might be enough. It would come down to this (using AMD and a Backbone view):


var moduleNode = document.querySelector('[data-module]'),
    moduleName = moduleNode.getAttribute('data-module');

require([moduleName], function(MyBackboneView) {
    new MyBackboneView({
        el: moduleNode
    });
});

That’s the gist of it. It works fine, but there are even better ways to apply this pattern of dependency injection.

IOC Containers

Let’s take a library such as the excellent wire.js by cujoJS. An important concept in wire.js is the “wire spec,” which is essentially an IOC container. Wire.js performs the actual instantiation of the application modules based on this declarative specification. Going this route, data-module should reference a wire spec (instead of a module) that describes which module to load and how to instantiate it, allowing for practically any type of module. Now, all we need to do is pass the reference to the spec and the viewNode to wire.js. We can simply define this:


wire([specName, { viewNode: moduleNode }]);

Much better. We let wire.js do all of the hard work. Besides, wire has a ton of other features.
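For illustration, the wire spec referenced by data-module could look roughly like this. It follows wire.js conventions ($ref resolves the viewNode we mixed in above), while the component key and module name are hypothetical:


define({
    view: {
        create: {
            module: 'myModule/MyView',
            args: [{ el: { $ref: 'viewNode' } }]
        }
    }
});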

In summary, we can say that our declarative composition in HTML (<div data-module="">) is parsed by the composer, which consults the advisor about whether the module should be loaded (data-condition) and about which module to load (data-module or data-variant), so that the dependency injector (DI, wire.js) can load and apply the correct spec and application module.

Declarative Composition

Detections for screen size and device features that are used to build responsive applications are sometimes implemented deep inside application logic. This responsibility should be laid elsewhere, decoupled more from the particular applications. We are already doing our (responsive) layout composition with HTML and CSS, so responsive applications fit in naturally. You could think of the HTML as an IOC container to compose applications.

You might not like to put (even) more information in the HTML. And honestly, I don’t like it at all. But it’s the price to pay for optimized performance when scaling up. Otherwise, we would have to make another request to find out whether and which module to load, which defeats the purpose.

Wrapping Up

I think the combination of declarative application composition, responsive module loading and module extension opens up a boatload of options. It gives you a lot of freedom to implement application modules the way you want, while supporting a high level of performance, maintainability and software design.

Performance and Build

Sometimes RWD actually decreases the performance of a website when implemented superficially (such as by simply adding some media queries or extra JavaScript). But for RWA, performance is actually what drives the responsive injection of modules or variants of modules. In the spirit of mobile first, load only what is required (and enhance from there).

Looking at the build process to minify and optimize applications, we can see that the challenge lies in finding the right approach to optimize either for a single application or for reusable application modules across multiple pages or contexts. In the former case, concatenating all resources into a single JavaScript file is probably best. In the latter case, concatenating resources into a separate shared core file and then packaging application modules into separate files is a sound approach.
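As an example of the latter case, here is a sketch of a RequireJS optimizer (r.js) build profile that concatenates a shared core and packages application modules into separate files (all paths and module names are made up):


({
    baseUrl: 'src',
    dir: 'dist',
    modules: [
        { name: 'core' },                        // shared core, concatenated
        { name: 'myModule', exclude: ['core'] }, // app modules packaged separately
        { name: 'myModule/large', exclude: ['core'] }
    ]
})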

A Scalable Approach

Responsive behavior and complete RWAs are powerful in a lot of scenarios, and they can be implemented using various patterns. We have only scratched the surface. But technically and conceptually, the approach is highly scalable. Let’s look at some example scenarios and patterns:

  • Sprinkle bits of behavior onto static content websites.
  • Serve widgets in a portal-like environment (think a dashboard, iGoogle or Netvibes). Load a single widget on a small screen, and enable more as screen resolution allows.
  • Compose context-aware applications in HTML using reusable and responsive application modules.

In general, the point is to maximize portability and reach by building on proven concepts to run applications on multiple platforms and environments.

Future-Proof and Portable

Among the major advantages of building applications in HTML5 are future-proofing and portability. Write HTML5 today and your efforts won’t be obsolete tomorrow. The list of platforms and environments where HTML5-powered applications run keeps growing rapidly:

  • As regular Web applications in browsers;
  • As hybrid applications on mobile platforms, powered by Apache Cordova (see note below):
    • iOS,
    • Android,
    • Windows Phone,
    • BlackBerry;
  • As Open Web Apps (OWA), currently only in Firefox OS;
  • As desktop applications (such as those packaged by the Sencha Desktop Packager):
    • Windows,
    • OS X,
    • Linux.

Note: Tools such as Adobe PhoneGap Build, IBM Worklight and Telerik’s Icenium all use Apache Cordova APIs to access native device functionality.

Demo

You might want to dive into some code or see things in action. That’s why I created a responsive Web apps repository on GitHub, which also serves as a working demo.

Conclusion

Honestly, not many big websites (let alone true Web applications) have gone truly responsive since The Boston Globe. However, looking at deciding factors such as cost, distribution, reach, portability and auto-updating, RWAs are both a huge opportunity and a big challenge. It’s only a matter of time before they become much more mainstream.

We are still looking for ways to get there, and we’ve covered just one approach to building RWAs here. In any case, declarative composition for responsive applications is quite powerful and could serve as a solid starting point.


© Lars Kappert for Smashing Magazine, 2013.

Original author: 
Eric Johnson

Oculus VR’s Palmer Luckey, left, and Nate Mitchell, right. At center, AllThingsD’s Lauren Goode tries out the Oculus Rift at CES 2013.

This is the second part of our two-part Q&A with Palmer Luckey and Nate Mitchell, the co-founders of virtual-reality gaming company Oculus VR. In Part One, Luckey and Mitchell discussed controlling expectations, what they want from developers, and the challenges of trying to make games do something radically different.

AllThingsD: What do you guys think about Google Glass? They’ve got their dev kits out right now, too, and –

Palmer Luckey: — What’s Google Glass? [laughs]

No, seriously, they’re doing something sort of similar with getting this wearable computing device to developers. Does the early buzz about Glass worry you?

Luckey: No. They’re not a gaming device, and they’re not a VR device, and they’re not an immersive device, and they’re five times more expensive than us.

Nate Mitchell: It’s just a completely different product. Wearable computing is super-interesting, and we’d love to see more wearable computing projects in the market. At Oculus, especially, we’re excited about the possibilities of Google Glass. We’ve seen it, we’ve checked it out, it’s very cool. But if you bring them together –

Luckey: Our image size is like 15 times larger than theirs. It’s like the difference between looking at a watch screen and a 60-inch monitor. It’s just an enormous difference.


Mitchell: With the Rift, you’re in there. You’re totally immersed in the world. I think one of the things people keep bringing up (with Glass) is the awkward, the social aspect. For the Rift, you strap into this thing, and you’re gone.

Luckey: It’s about being inside the virtual world, not caring about the real one.

Mitchell: You could put your Glass on in the virtual space.

Luckey: We could do that! We could simulate Glass. … It’s not that hard. You just have a tiny heads-up display floating there. A really tiny one.

Mitchell: I like it.

“Okay, Rift, take a picture. Okay, Rift, record a video …”

Luckey: There’s actually Second Life mods like that. People sell heads-up displays that you can buy.

Mitchell: Really?

Luckey: And they put information in there like distance to waypoints and stuff.

Mitchell: Oh, that’s cool!

Luckey: Yeah, they overlay it on the screen when your character’s wearing it.

I never really “got” Second Life. Minecraft, I can wrap my head around quickly. But Second Life …

Luckey: It’s very difficult to get into. There’s a steep learning curve. The last time I went into Second Life was to buy bitcoins from a crazy guy who was selling them below market value, but you had to go into Second Life to meet with him.

Mitchell: The underbelly of the Internet.

Luckey: They’re actually working on Oculus Rift support, though. The kind of people who make games like Second Life definitely see the potential for virtual reality — being able to step into your virtual life.

And if you’re completely immersed in the game, I guess that wherever you’re playing, you need to trust whoever’s around you.

Mitchell: Absolutely. There’s already some sneaking up on people happening in the office. Someone’s developing, they’re testing the latest integration, and then Palmer comes up and puts his hands on their shoulders: “Heyyyy, Andrew! What’s going on?” There’s a trust factor.

Luckey: Have you seen the Guillotine Simulator? (video below) Some people are showing that without even telling the person what it is: “Here, check this out!” “Whoa, what’s going on?” And then — [guillotine sound effect]

Mitchell: One thing that that does lead into is, we’re exploring ways to just improve the usability of the device. When you put on the Rift, especially with the dev kit, you’re shut off from the outside world. What we’re looking at doing is how can we make it easy to pull it off. Right now, you have to slip it over your head like ski goggles. The dev kit was designed to be this functional tool, not the perfect play-for-10-hours device. With the consumer version, we’re going for that polished user experience.

What about motion sickness? Is it possible to overcome the current need for people to only play for a short period of time on their first go?

Luckey: The better we make the hardware, the easier it’ll be for people to just pick up and play. Right now, the hardware isn’t perfect. That’s one of the innate problems of VR: You’re trying to make something that tricks your brain into thinking it’s real. Your brain is very sensitive at telling you things are wrong. The better you can make it, the more realistic you can make it, the more easily your brain’s gonna accept the illusion and not be throwing warning bells.

You mentioned in one of your recent speeches that the Scout in Team Fortress 2 –

Luckey: — he’s running at like 40 miles per hour. But it’s not just, “Oh, I’m running fast.” It’s the physics of the whole thing. In real life, if you are driving at 40mph, you can’t instantly start moving backward. You can’t instantly start strafing sideways. You have inertia. And that’s something that, right now, games are not designed to have. You’re reacting in these impossible ways.

Mitchell: In that same vein, just as Palmer’s saying the hardware’s not perfect yet, a huge part of it is the content.

Luckey: You could make perfect hardware. Pretend we have the Matrix. Now you take someone and put them in a fighter jet and have them spinning in circles. That’s going to make someone sick no matter how good it is, because that actually does make people sick. If you make perfect hardware, and then you do things that make people sick in real life, you’re gonna make them sick in VR, too. Right now, there’s lots of things going on in games that don’t make people sick only because they’re looking at them on a screen. Or, in so many games, they’ll have cutscenes where they take control of the camera and shake it around. You don’t want to do that in VR because you’re not actually shaking around in real life.

You’re changing the experience that you have previously established within VR.

Mitchell: It breaks the immersion.

Luckey: And that’s why it’s so hard to instantly transfer. In the original version of Half-Life 2, when you’d go into a new space for the first time, the whole game would just freeze for a second while it loaded. It’s just a short freeze, but players were running along or driving along and all of a sudden, jjt! Now it looks like the whole world’s dragging along with you, and a lot of people feel very queasy when that happens.

Mitchell: It comes back to content. My talk at GDC was very specifically about how developing for VR is different from a 2-D monitor. All those things like cutscenes, storytelling, scale of the world — if the player is at four feet on the 2-D monitor and you put them in there, they immediately notice. They look down and they have the stereo cues: “I’m a midget!” So you make them taller, and now they don’t fit through doors. We really do believe that, at first, you’re going to see these ports of existing games, but the best “killer app” experiences are going to come from those made-for-VR games.

Luckey: And that’s not even to say it has to be new franchises. It doesn’t have to be a new type of game. But you want the content to be designed specifically for the hardware.

Mitchell: It’s just like the iPhone. The best games come from developers pairing hardware and software.


Oculus VR CEO Brendan Iribe testing out the Rift at D: Dive Into Media.

And that’s the 10,000-foot view: Does VR change game design in a fundamental way?

Mitchell: Yes. Fundamentally. Absolutely. I think, right now, there’s this great renaissance in the indie community. Indie developers are doing awesome things. If you look at games like The Walking Dead, you’ve got the mainstream genres here. You’re going to have a lot of these indie games start to feel more natural in virtual reality, because that’s almost, like, the intended experience.

Luckey: And not to invent a whole new genre on the fly, but you don’t see many first-person card games or something. There’s a lot of card game videogames, but there’s not many that are first-person because it wouldn’t make any sense to do.

Like a poker game where you could look around the table and read people’s reactions?

Mitchell: Exactly.

Luckey: And you could have all kinds of things integrated into it. I guess that would fit into the first-person-shooter genre, but not really, because you’re not moving and you’re not shooting. You’re just playing cards.

Mitchell: And if you look at the research that’s been done on virtual characters, it’s the type of thing where, if you smile at me in VR, even if you’re an NPC (non-playable character), I’m much more likely to smile back. Your brain is tricked into believing you’re there.

Luckey: There’s also fascinating research on confidence levels in VR, even tiny things. There was a study where a bunch of people performed tasks in real life, in a control group, and then performed them in VR. And the only difference is that one group in VR was about six inches taller than the other group. So, one was shorter than the NPC they were interacting with, one was taller. Universally, all of the “taller” people exhibited better negotiation with the NPCs. Then, they took them out (of the VR simulation) and they redid the (real-world) study, putting everyone back in another trial with a physical person. The people who’d been tall in VR and negotiated as a taller person did better when they went back into the real negotiation as well. It’s ridiculous.

Mitchell: That’s the sort of thing we’re super-excited about. That’s the dream.

And do you have a timeline for when –

Mitchell: When the dream comes to fruition?

Luckey: It’s a dream, man! Come on! [laughs]

Not when it comes to fruition. Are there milestones for specific accomplishments along the way?

Luckey: Sure, we have them, internally. [laughs]

Mitchell: We have a road map, but like we keep saying, a huge part of this is content. Without the content, it’s just a pair of ski goggles.

Luckey: And we don’t even know, necessarily, what a road map needs to look like. We’re getting this feedback, and if a lot of people need a certain feature — well, that means it’s going to take a little longer.

Mitchell: But we have a rough road map planned, and a lot of exciting stuff planned that I think you’ll see over the course of the next year.

And is there a timeline for when the first consumer version comes out?

Mitchell: It’s TBD. But what we can say is, Microsoft and Sony release their dev kits years in advance before they get up onstage and say, “The Xbox One is coming.” We went for the same strategy, just open and publicly.

Luckey: And we don’t want to wait many years before doing it.

Mitchell: Right. So, right now, we’re giving developers the chance to build content, but they’re also co-developing the consumer version of the Rift with us. Once everyone’s really happy with it, that’s when you’ll see us come to market.

Luckey: And not sooner. We don’t want to announce something and then push for that date, even though we know we can make it better.


And what about the company, Oculus VR? Is this dream you’re talking about something you have to realize on your own? Do you want to someday get acquired?

Luckey: Our No. 1 goal is doing it on our own. We’re not looking to get acquired, we’re not looking to flip the company or anything. I mean, partnering with someone? Sure, we’re totally open to discussions. We’re not, like, we want to do this with no help.

But you wouldn’t want to be absorbed into a bigger company that’s doing more than just VR.

Mitchell: The goal has been to build great consumer VR, specifically for gaming. We all believe VR is going to be one of the most important technologies of –

Luckey: — ever!

Mitchell: Basically.

Not to be too hyperbolic or anything.

Luckey: It’s hard not to be. It’s like every other technological advance could practically be moot if you could do all of it in the virtual world. Why would you even need to advance those things in the real world?

Mitchell: Sooo …

Luckey: [laughs]

Mitchell: With that in mind, we have to figure out how we get there. But right now, we’re doing it on our own.

Luckey: And we think we can deliver a good consumer VR experience without having to partner with anyone. We’re open to partnering, but we don’t think we have to. We’re not banking on it.

And how does being based in southern California compare to being closer to a more conventional tech hub like Silicon Valley?

Mitchell: Recruiting is a little harder for us. But overall, we’ve been able to attract incredible talent.

Luckey: And if you’re in Silicon Valley, it’s probably one of the easiest places to start a company in terms of hiring people. But VR is such a tiny field, it’s not like all of a sudden we’re going to go to Silicon Valley and there’s, like, thousands of VR experts. Now, if I’m a Web company or a mobile company –

Mitchell: — that’s where I’d want to be.

Luckey: But in this case, these people aren’t necessarily all up in Silicon Valley. We’ve hired a bunch of people from Texas and Virginia and all these other places. It’s a niche industry. We actually have the biggest concentration of people working in consumer VR right now. And a lot of the top talent we get, they don’t care where we are, as long as it’s not, like, Alaska. They just really want to work on virtual reality, and there’s no one else doing it like we are.

Original author: 
Eric Johnson

Oculus VR’s Palmer Luckey, left, and Nate Mitchell, right. At center, AllThingsD’s Lauren Goode tries out the Oculus Rift at CES 2013.

There were plenty of great onstage interviews at D11 last week, but — as attendees doubtless know — the conversations that happen offstage are often just as engaging. Such was the case on the last day of the conference, when Oculus VR co-founders Palmer Luckey and Nate Mitchell drove up from their office in Irvine, Calif., for lunch and an hour-long chat.

Oculus is a 30-person startup focused on just one thing: Virtual-reality videogames, by way of a wearable headset that plugs into gamers’ PCs. Its much-anticipated VR headset, the Oculus Rift, was funded on Kickstarter last year to the tune of $2.4 million, and an early version is now in the hands of thousands of game developers. A consumer version is on the way — though the company has yet to announce a release date.

At conferences like this year’s GDC, Luckey has publicly acknowledged that the first versions of the headset won’t be perfect, because developers are still learning what game mechanics work (or don’t) in VR. In this wide-ranging Q&A, Luckey and Mitchell told AllThingsD about that learning process, the Rift’s limitations, its ballpark price point, what they want from developers, messing with coworkers wearing the Rift, and how it stacks up to other next-gen technology like the Xbox Kinect and Google Glass.

For easier reading, we’ve split the chat into two parts. Part One is below. And here’s Part Two.

(Before we begin, a sad note: This interview took place last Thursday, one day before Oculus VR’s lead engineer and fellow co-founder Andrew Reisse was struck and killed as a bystander during a high-speed car chase. The company memorialized Reisse as a “brilliant computer graphics engineer” on its blog on Saturday).

Photo: Asa Mathat / D: All Things Digital

AllThingsD: How do you control people’s high expectations for the Rift? At GDC, Palmer called virtual reality the “holy grail of gaming,” but was quick to clarify that the first version you release won’t completely fulfill that promise.

Palmer Luckey: The developer kit, especially, but yeah, even the first consumer Rift has a long way to go. People who research it tend to have good expectations, but there’s two other sets: You have people who think that VR tech is already super-advanced, that it’s like “The Matrix” already, and that we just happen to be cheaper. And then you have people who think that it’s completely broken and hopeless. The best way is to get them to look inside of a Rift, and usually they’re like, “Oh, I get it. It’s not the Matrix, but it’s also not terribly broken.”

Who’s the audience for the Rift? Who’s going to really appreciate it?

Luckey: I don’t think it’s just hardcore gamers. At GDC, Valve talked about how players who were very skilled at Team Fortress 2 felt like the Rift lowered their skill level. I play a ton of TF2: You’re jumping off things and spinning around and then instantly snapping back, constantly whipping back and forth as you walk along. But what they found with people who didn’t play games as much, who weren’t TF2 players — they reported that it increased their perceived skills. I think the Rift can open up the possibility, for a lot of games that have been “hardcore games,” for normal people to play them. They have the right muscle memory built up. Every day, they look around and they move their head to look around. It’s not a huge leap to do that inside of a video game when you have the proper tools.

Nate Mitchell: It also totally depends upon the content. We’ve already seen some people do Minecraft mods (unofficial modifications to the original game to support the Oculus Rift). We have the families in the office, they bring in their kids, and you’ve got 10 kids playing Minecraft in our conference room on the Rift, on the same server. That shows you that there is this huge audience of all sorts of people.

Luckey: In fact, we’ve done that some in the office, too. [laughs] It’s not just for the kids.

Mitchell: Right now, the audience is game developers, and the content is super-key to the whole user experience. Having content that appeals to those types of people, that’s what we want.

Do you need a killer app?

Mitchell: Definitely. We could use a couple killer apps. Ideally, we’d have a game for the niche market. You’d have Call of Duty 9 over here, and something like Minecraft over here, and a wide swath of games in between.

But what about a killer app that’s exclusively for the Rift? A lot of Wii owners only played Wii Sports. Do you need something like that to distinguish the game play?

Mitchell: I won’t say that we need it, but I will say that we want it. That’s something we are trying to figure out. Is it something someone else is going to develop? We’ve discussed — does it make sense to do something ourselves internally? We’re not sure yet. Right now, the focus has been, “Let’s build the tools, and help the developers get there.”

Luckey: It doesn’t make sense for our first focus to be to hire a bunch of game developers to sit and try and figure out what works best in VR, when there’s literally thousands of other people that are willing to figure it out for themselves. They want the privilege of being the first to work in this space.


How does the Rift fit in with other new gaming hardware coming out, like the Xbox One and PlayStation 4?

Luckey: Right now, it’s just for PC games, because that’s the open platform. Mobile support’s also possible, but that’s just more of a technical problem — phones are not powerful enough to provide a good VR experience right now. There’s no technical reason that the Rift can’t work on consoles. It has standard input/outputs, it wouldn’t be a lot of work. It’s just a matter of console manufacturers deciding to license it as a peripheral. They’re the gatekeepers.

Have you talked to them about that?

Luckey: We can’t say.

Mitchell: I think when you look at this upcoming console generation, we are this black sheep, doing something completely different, but we like that. We’re aiming for what we consider to be next-generation gaming. Xbox and PlayStation, they’re doing awesome stuff. And we’re big fans. That said, the Rift is going to be something entirely different.

Luckey: And we’re focusing specifically on gaming. We’re not trying to make a multi-platform home media hub for the family.

How much is the consumer version of the Rift going to cost?

Luckey: The current developer kits are $300. We don’t know what the consumer version’s going to cost — it could be more, could be less. But we’re looking to stay in that same ballpark. We’re not going to be charging $800 or something. We have to be affordable. If you’re not affordable, you may as well not exist for a huge segment of the market.

I guess you would know, since you have the world’s largest private collection of VR headsets.

Mitchell: [laughs]

Luckey: I’m one of the few people where it’s different. I would spend whatever it was. Gamers are not known to be the most affluent population of people. If something’s even $600, it doesn’t matter how good it is, how great of an experience it is — if they just can’t afford it, then it really might as well not exist. We’re going for the mainstream, but time will tell what the market is.

Mitchell: A big part of it’s going to be the content. If it’s only Call of Duty 9, it’s only going to be the niche hardcore gamers. If we can get other stuff on there, which I think we’re already making exciting progress on, I think it’s going to be a lot broader. The three tenets for us are immersion, wearability and affordability. If we can nail those three things, that’s the killer combination that makes it a consumer VR device.

Luckey: The other thing is, it’s possible to make better hardware if you sell it at that lower price point. When you can sell thousands of something, or tens or hundreds or millions of something, you can afford to put better components into it than if you were only making a hundred of these things for $10,000 each. There are people who’ve said, “You should sell a version with better specs for $1,000,” but it’d be better to sell it for $200 and sell more of them.

What are the limitations of the Rift right now, beyond needing to be wired into a PC?

Mitchell: We don’t have positional tracking right now.

Luckey: [That means] you can’t track movement through space, you can only track rotation.

Mitchell: That’s a big one, something we’d love to solve for the consumer version. The only other “limitation,” I’d say right now — well, there’s things we want to improve, like weight. The more comfortable it is, the more immersive it is. So, there’s that. There’s resolution. We want to bring the resolution up for the consumer version.

And, for the foreseeable future, will players still need to use a handheld console-like controller?

Luckey: We don’t know yet.

Mitchell: Human-computer interaction and user input, especially for VR, is something that we’re constantly researching and evaluating.

Luckey: The reason we’re using gamepads (now) is that everyone knows how to use it, so we don’t need to teach a new [control] device while we’re demoing. But we do know that a keyboard, mouse or gamepad isn’t the best possible VR gaming interface.

Mitchell: It’s another abstraction. We’d love to — well, we’re exploring the possibilities.

Luckey: [waving hand] Use your imagination. [he and Mitchell both laugh]

Mitchell: Microsoft, with the new Kinect, is doing some really interesting stuff. Leap Motion is doing incredible stuff. This tech is out there. It’s a matter of packaging it just right for virtual reality, so that we’re putting players totally inside the game. We always joke, you want to look down in the game and go, ‘Yes, I’m Batman!’ And then you pull out your lightsaber or whatever it is — I know, I’m destroying canon here –

Luckey: — I, I’ll just leave that.

Mitchell: [laughs]

Luckey: One of the things I talked about at GDC is that other game consoles, it’s very abstract. You’re controlling something on a screen, using a controller that’s nothing like how you interact in real life. If you hand a person who doesn’t game a 360 controller, it’s like, “Here’s a 16-button, dual analog controller. Use it!” It’s very difficult for someone to pick it up.

And that was the brilliance of the Wiimote, right? If you want to bowl, here’s the controller, just move it like you’re bowling.

Luckey: Even then, it was an abstraction. But it’s clear you want a control interface so that people feel they’re inside the game. It’s clear that you want to take it to the level where they’re not just looking around in the game, but they’re interacting in the same way that they would interact with real life. On Kinect, no matter how great the tracking is, you’re still controlling something on a screen. You don’t feel like you’re inside of the game if you’re looking at a screen in your living room. It’s never going to feel good until you can feel like you’re actually that person.

In Part Two of this Q&A, Luckey and Mitchell discuss Google Glass, motion sickness, messing with coworkers, and their long-term plans for the company.

Original author: 
Caleb Barlow


Mobile phone image copyright Oleksiy Mark

When it comes to mobile computing, many organizations either cringe at the fear of security risks or rejoice in the business potential. On one hand, mobile is revolutionizing business operations — improving operational efficiency, enhancing productivity, empowering employees and delivering an engaging user experience. On the other hand, sensitive data that used to be housed in a controlled environment of a company desktop or even laptop is now sitting in an employee’s back pocket or purse.

In today’s ultra-connected world, it can seem like threats are all around us. High-profile breaches and attacks from hacker groups have organizations of all sizes — from multinational enterprises to mom-and-pop shops — doubling down on security and making sure there aren’t any cracks in their defenses. Mobile security doesn’t have to be the Achilles’ heel that leads to a breach. New, innovative solutions for securing mobile devices at the application level are rapidly hitting the market and the latest IBM X-Force report indicates that by 2014, mobile computing will be more secure than traditional desktops. Phones, tablets and other devices are a staple of the 21st century workplace and in order to fully embrace this technology, businesses must be certain they’re well protected and secure.

Do You Know Where Your Data Is?

Tackling mobile security can seem like a daunting task. The IBM X-Force report also indicates a 19 percent increase in the number of exploits publicly released that can be used to target mobile devices. Making the task more challenging is the fact that — especially in the case of BYOD — the line between professional and personal data is more blurred on mobile platforms than anywhere before. According to Gartner, by 2014, 90 percent of organizations will support corporate applications on personal devices. This means that devices being used to connect with enterprise networks or create sensitive company data are also being used for social networking and to download mobile apps, leaving organizations with the predicament of how to manage, secure and patrol those devices. From the point of view of a hacker, a mobile device becomes an ideal target as it has access to the enterprise data as well as personal data that can be used to mount future attacks against your friends and colleagues.

Mobile apps are a great example of why mobile security tends to raise concerns among security professionals and business leaders. Employees install personal apps onto the same devices they use to access their enterprise data, but are not always careful or discriminating about the security of those apps — whether they are the real version or a manipulated version that will attempt to steal corporate data. According to a recent report by Arxan Technologies, more than 90 percent of the top 100 mobile apps have been hacked in some capacity. Some free mobile apps even demand access to an employee’s contact list in order to function correctly. Just pause and think about that for a second. Would you give your entire contact list to a complete stranger? That’s effectively what you are doing when you install many of these popular applications. If an organization takes a step back and really considers what employees are agreeing to, willingly or not, the results can be troublesome. So the challenge remains — how to get employees to recognize and understand just how vulnerable their mobile device can be to an enterprise.

Mitigating Mobile Risks: Why it’s easier than you think

Mobile app security and device management do not have to be a company’s security downfall. By employing intelligent security solutions that adapt to the requirements of a specific context, businesses can mitigate operational risk and unleash the full potential of mobility.

The key to mitigating security risks when it comes to mobile devices accessing enterprise data is access control. This may include passcode locks, data protection and malware and virus prevention. With that said, IT security priorities should focus on practices, policies and procedures, such as:

  • Risk analysis: Organizations must understand what enterprise data is on employee devices, how it could be compromised and the potential impact of the compromise (i.e. What does it cost? What happens if the device is lost? Is the data incidental or crucial to business?).
  • Securing the application: In the pre-mobile, personal computer era, simply securing the device and the user were sufficient. When it comes to mobile devices, we also need to think about securing the application itself. As a typical application is downloaded from a store, the end user really has no idea who built the application, what it actually does with your data or how secure it is. Corporate applications with sensitive data need to be secure in their own right.
  • Secure mobile access — authentication: Since mobile devices are shared, it’s important to authenticate both the user and the device before granting access and to look at the context of the user requesting access based on factors like time, network, location, device characteristics, role, etc. If the context appears to be out of line with normal behavior, appropriate countermeasures can be taken.
  • Encryption: Simply put, if the data is sensitive it needs to be encrypted both while at rest as well as while in motion on the network.

Once an enterprise has defined its security policy — establishing set policies/procedures regarding content that is allowed to be accessed on devices, how it’s accessed and how the organization will handle lost/stolen devices that may contain business data — mobile technology solutions can help ensure that no opening is left unguarded.

So if security concerns are holding you back from “going mobile,” rest assured — there are many companies that have embraced trends like “Bring Your Own Device” without sending their Chief Security Officers into a panic. As long as organizations take the right steps and continually revisit their security posture to ensure that every endpoint is secured and that the proper technology is in place, it really is possible to be confident about your mobile security strategy.

Caleb Barlow is part of the executive team in IBM’s Security division. He manages three portfolios — Application Security, Data Security and Mobile Security. In addition to his day job, Caleb also hosts a popular Internet Radio show focused on IT Security with an audience averaging over 20k listeners per show.

Original author: 
Stéphanie Walter

  

Responsive Web design has been around for some years now, and it was a hot topic in 2012. Many well-known people such as Brad Frost and Luke Wroblewski have a lot of experience with it and have helped us make huge improvements in the field. But there’s still a whole lot to do.

In this article, we will look at what is currently possible, what will be possible in the future using properties that are not yet standardized (such as CSS Level 4 features and HTML5 APIs), and what still needs to be improved. This article is not exhaustive, and we won’t go deep into each technique, but you’ll have enough links and knowledge to explore further by yourself.

The State Of Images In Responsive Web Design

What better aspect of responsive Web design to start off with than images? This has been a major topic for a little while now. It got more and more important with the arrival of all of the high-density screens. By high density, I mean screens with a pixel ratio of 2 or higher; Apple calls these Retina devices, and Google calls them XHDPI. In responsive Web design, images come with two big related challenges: size and performance.

Most designers like pixel perfection, but “normal”-sized images on high-density devices look pixelated and blurry. Simply serving double-sized images to high-density devices might be tempting, right? But that would create a performance problem. Double-sized images would take more time to load. Users of high-density devices might not have the bandwidth necessary to download those images. Also, depending on which country the user lives in, bandwidth can be pretty costly.

The second problem affects smaller devices: why should a mobile device have to download a 750-pixel image when it only needs a 300-pixel one? And do we have a way to crop images so that small-device users can focus on what is important in them?

Two Markup Solutions: The <picture> Element and The srcset Attribute

A first step in solving the challenge of responsive images is to change the markup of embedded images on an HTML page.

The Responsive Images Community Group supports a proposal for a new, more flexible element, the <picture> element. The concept is to use the now well-known media queries to serve different images to different devices. Thus, smaller devices would get smaller images. It works a bit like the markup for video, but with different images being referred to in the source element.

The code in the proposed specification looks like this:


<picture width="500" height="500">
  <source media="(min-width: 45em)" src="large.jpg">
  <source media="(min-width: 18em)" src="med.jpg">
  <source src="small.jpg">
  <img src="small.jpg" alt="">
  <p>Accessible text</p>
</picture>

If providing different sources is possible, then we could also imagine providing different crops of an image to focus on what’s important for smaller devices. The W3C’s “Art Direction” use case shows a nice example of what could be done.

Picture element used for artistic direction
(Image: Egor Pasko)

The solution is currently being discussed by the W3C Responsive Images Community Group but is not usable in any browser at the moment as far as we know. A polyfill named Picturefill is available, which does pretty much the same thing. It uses a div and data-attribute syntax for safety’s sake.
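To give an idea, Picturefill’s markup looks roughly like this (simplified; check the project’s documentation for the exact, current syntax):


<div data-picture data-alt="A giraffe">
    <div data-src="small.jpg"></div>
    <div data-src="large.jpg" data-media="(min-width: 45em)"></div>
    <!-- fallback for browsers without JavaScript -->
    <noscript><img src="small.jpg" alt="A giraffe"></noscript>
</div>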

A second proposal for responsive images markup was made to the W3C by Apple and is called “The srcset Attribute”; its CSS Level 4 equivalent is image-set(). The purpose of this attribute is to force user agents to select an appropriate resource from a set, rather than fetch the entire set. The HTML syntax for this proposal is based on the <img> tag itself, and the example in the specification looks like this:


<img alt="The Breakfast Combo"
  src="banner.jpeg"
  srcset="banner-HD.jpeg 2x, banner-phone.jpeg 100w, banner-phone-HD.jpeg 100w 2x">

As you can see, the syntax is not intuitive at all. The value of the attribute consists of comma-separated strings, each giving the URL of an image followed by the device pixel ratio and/or the maximum viewport width it is intended for.

In plain English, this is what the snippet above says:

  • The default image is banner.jpeg.
  • Devices with a pixel ratio of 2 or higher should use banner-HD.jpeg.
  • Devices with a maximum viewport width of 100 should use banner-phone.jpeg.
  • Devices with a maximum viewport width of 100 and a pixel ratio of 2 or higher should use banner-phone-HD.jpeg.

The first source, banner.jpeg, is the default image if the srcset attribute is not supported. The 2x suffix for banner-HD.jpeg means that this particular image should be used for devices with a pixel ratio of 2 or higher. The 100w for banner-phone.jpeg represents the maximum viewport width that this image should be used for. Due to its technical complexity, this syntax has not yet been implemented in any browser.

The syntax of the image-set() CSS property works pretty much the same way and enables you to load a particular background image based on the screen’s resolution:


background-image: image-set("foo.png" 1x,
  "foo-2x.png" 2x,
  "foo-print.png" 600dpi);

This proposal is still a W3C Editor’s Draft. For now, it works (with the -webkit- prefix) in Safari 6+ and Chrome 21+.

Image Format, Compression, SVG: Changing How We Work With Images on the Web

As you can see, these attempts to find a new markup format for images are still highly experimental. This raises the issue of image formats themselves. Can we devise a responsive solution by changing the way we handle the images themselves?

The first step would be to look at alternative image formats that have a better compression rate. Google, for example, has developed a new image format named WebP, which is 26% smaller than PNG and 25 to 34% smaller than JPEG. The format is supported in Google Chrome, Opera, the Yandex browser and Android, and can be activated in Internet Explorer using the Google Chrome Frame plugin. The main problem with this format is that Firefox does not plan to implement it. Knowing this, widespread use is unlikely for now.

Another idea that is gaining popularity is progressive JPEG images. Progressive JPEG images are, as the name suggests, progressively rendered. The first rendering is blurry, and then the image gets progressively sharper as it renders. Non-progressive JPEG images are rendered from top to bottom. In her article “Progressive JPEGs: A New Best Practice,” Ann Robson argues that progressive JPEGs give the impression of greater speed than baseline JPEGs. A progressive JPEG gives the user a quick general impression of the image before it has fully loaded. This does not solve the technical problems of performance and image size, though, but it does improve the user experience.

Another solution to the problems of performance and image size is to change the compression rate of images. For a long time, we thought that increasing the compression rate of an image would damage its overall quality. But Daan Jobsis has done extensive research on the subject and has written an article about it, “Retina Revolution.” In his experiments, he tried different image sizes and compression rates and came up with a pretty interesting solution: if you keep the image’s dimensions at twice the displayed size but use a much higher compression rate, the image will have a smaller file size than the original, yet still look sharp on both normal and high-density screens. With this technique, Jobsis cut the weight of the image by 75%.

Image compression example
Daan Jobsis’ demonstration of image compression.
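In practice, the technique amounts to exporting the image at twice its display size with heavy compression and letting the browser scale it down; a minimal sketch (file names and dimensions are illustrative):

<!-- Saved at 1600 × 800 pixels with a JPEG quality of about 25,
     but displayed at 800 × 400: a smaller file that still looks
     sharp on high-density screens. -->
<img src="photo-1600x800-q25.jpg" width="800" height="400" alt="Our office">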

Given the headaches of responsive images, the idea of gaining pixel independence from images wherever possible is seducing more and more designers and developers. The SVG format, for example, can be used to create all of the UI elements of a website in a resolution-independent way: the elements scale well on small devices and won’t be pixelated on high-density screens. Font icons are another growing trend. They involve assigning icon glyphs to certain characters of a font (such as those in the Unicode Private Use Area), giving you the flexibility of fonts. Unfortunately, neither solution works for photographs, so a viable markup or image format is still eagerly awaited.
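As an illustration, an icon font boils down to a @font-face declaration and a pseudo-element (the font name, file and code point below are made up):

@font-face {
   font-family: "MyIcons";
   src: url("myicons.woff") format("woff");
}

.icon-menu:before {
   font-family: "MyIcons";
   content: "\e001"; /* the glyph mapped to this Private Use Area character */
}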

Responsive Layout Challenge: Rearrange And Work With Content Without Touching the HTML?

Let’s face it, the fluid grids made of floats and inline blocks that we use today are a poor patch waiting for a better solution. Working with layout and completely rearranging blocks on the page for mobile without resorting to JavaScript is a nightmare right now. It’s also pretty inflexible. This is particularly significant on websites created with a CMS; the designer can’t change the HTML of every page and every version of the website. So, how can this be improved?

Four CSS3 Layout Solutions That Address the Flexible Layout Problem

The most obvious possible solution is the CSS3 flexible box layout model (or “flexbox”). Its current status is candidate recommendation, and it is supported in most major mobile browsers and desktop browsers (in IE starting from version 10). The model enables you to easily reorder elements on the screen, independent of the HTML. You can also change the box orientation and box flow and distribute space and align according to the context. Below is an example of a layout that could be rearranged for mobile. The syntax would look like this:


.parent {
  display: flex;
  flex-flow: column; /* lay the items out vertically, in a single column */
}

.children {
  order: 1; /* change the visual order of elements, independent of the HTML */
}

Flexbox as an example

The article “CSS3 Flexible Box Layout Explained” will give you a deeper understanding of how flexbox works.
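To make the reordering responsive, these properties can simply be scoped to a media query; a minimal sketch (class names and breakpoint are illustrative):

/* On small screens, stack everything into a single column and
   move the sidebar after the main content. */
@media (max-width: 600px) {
   .page { display: flex; flex-flow: column; }
   .main-content { order: 1; }
   .sidebar { order: 2; }
}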

Another solution quite close to the flexbox concept of reordering blocks on the page, but with JavaScript, is Relocate.

A second type of layout that is quite usable for responsive design today is the CSS3 multiple-column layout. The module is at the stage of candidate recommendation, and it works pretty well in most browsers, except for IE 9 and below. The main benefit of this model is that content can flow from one column to another, providing a huge gain in flexibility. In terms of responsiveness, the number of columns can be changed according to the viewport’s size.

Setting the size of the columns and letting the browser calculate the number of columns according to the available space is possible. Also possible is setting the number of columns, with the gaps and rules between them, and letting the browser calculate the width of each column.

CSS3 Multiple Column layout

The syntax looks like this:


.container {
  column-width: 10em; /* The browser will create as many 10em columns as the available space allows. */
}

.container {
  columns: 5; /* The browser will create 5 columns; their width depends on the available space. */
  column-gap: 2em;
}

To learn more, read David Walsh’s article “CSS Columns.”
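And to make the column count respond to the viewport, the property can be changed inside media queries; a sketch with illustrative breakpoints:

.container { columns: 1; }

@media (min-width: 600px) {
   .container { columns: 2; column-gap: 2em; }
}

@media (min-width: 1000px) {
   .container { columns: 3; column-gap: 2em; }
}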

A third CSS3 module that could gain more attention in future is the CSS3 grid layout. This gives designers and developers a flexible grid they can work with to create different layouts. It allows content elements to be displayed in columns and rows, independent of their order in the HTML. First, you would declare a grid on the container, and then place all child elements in this virtual grid. You could then define a different grid for small devices or change the position of elements in the grid. This allows for enormous flexibility when used with media queries, changes in orientation and so on.

The syntax looks like this (from the 2 April 2013 working draft):


.parent {
   display: grid; /* declare a grid */
   grid-definition-columns: <1st column size> <2nd column size> …;
   grid-definition-rows: <1st row size> <2nd row size> …;
}

.element {
   grid-column: 1;
   grid-row: 1;
}

.element2 {
   grid-column: 1;
   grid-row: 3;
}

To set the sizes of columns and rows, you can use various units, as detailed in the specification. To position the various elements, the specification says this (referring to the game layout it uses as an example): “Each part of the game is positioned between grid lines by referencing the starting grid line and then specifying, if more than one, the number of rows or columns spanned to determine the ending grid line, which establishes bounds for the part.”

The main problem with this module is that it is currently supported only in IE 10. To learn more about this layout, read Rachel Andrew’s “Giving Content Priority With CSS3 Grid Layout.” Also, note that the specification and syntax for grid layouts changed on 2 April 2013. Rachel wrote an update on the syntax, titled “CSS Grid Layout: What Has Changed?”
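As a hedged sketch of that flexibility, here is how a two-column grid could collapse on small screens, using the 2 April 2013 draft syntax (sizes, class names and breakpoint are illustrative):

.parent {
   display: grid;
   grid-definition-columns: 200px 1fr;
}

.nav  { grid-column: 1; grid-row: 1; }
.main { grid-column: 2; grid-row: 1; }

/* On small screens, switch to a single column and stack the areas. */
@media (max-width: 600px) {
   .parent { grid-definition-columns: 1fr; }
   .main { grid-column: 1; grid-row: 1; }
   .nav  { grid-column: 1; grid-row: 2; }
}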

The last layout that might become useful in future if implemented in browsers is the CSS3 template layout. This CSS3 module works by associating an element with a layout “name” and then ordering the elements on an invisible grid. The grid may be fixed or flexible and can be changed according to the viewport’s size.

The syntax looks like this:


.parent {
   display: "ab"
            "cd"; /* create the invisible grid */
}

.child1 {
   position: a;
}

.child2 {
   position: b;
}

.child3 {
   position: c;
}

.child4 {
   position: d;
}

This renders as follows:

CSS3 template layout

Unfortunately, browser support for this CSS3 module is currently nonexistent. Maybe someday, if designers and developers show enough interest in this specification, some browser vendors might implement it. For the moment, you can test it out with a polyfill.

Viewport-Relative Units and the End of Pixel-Based Layout

Viewport-based percentage lengths — vw, vh, vmin and vmax — are units measured relative to the dimensions of the viewport itself.

One vw unit is equal to 1% of the width of the initial containing block. If the viewport’s width is 320 pixels, then 1vw is 320/100 = 3.2 pixels.

The vh unit works the same way but is relative to the height of the viewport. So, 50vh would equal 50% of the height of the viewport. At this point, you might wonder how these differ from percentage units. While percentage units are relative to the size of the parent element, the vh and vw units are always relative to the size of the viewport, regardless of the size of their parents.

This gets pretty interesting when you want to, for example, create a content box and make sure that it never extends below the viewport’s height so that the user doesn’t have to scroll to find it. This also enables us to create true 100%-height boxes without having to hack all of the elements’ parents.
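A minimal sketch of that idea (the class name is illustrative):

.promo-box {
   max-height: 90vh; /* never taller than 90% of the viewport */
   overflow: auto;   /* if the content is longer, scroll inside the box, not the page */
}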

The vmin unit is equal to the smaller of vw or vh, and vmax is equal to the larger of vw or vh; so, those units respond perfectly to changes in device orientation, too. Unfortunately, for the moment, those units are not supported in Android’s browser, so you might have to wait a bit before using them in a layout.

A Word on Adaptive Typography

The layout of a website will depend heavily on the content. I cannot conclude a section about the possibilities of responsive layout without addressing typography. CSS3 introduces a font-size unit that can be pretty handy for responsive typography: the rem unit. While fonts measured in em units have a size relative to that of their parent element, fonts measured in rem units are relative to the font size of the root element. For a responsive website, you could write some CSS like the following and then change all font sizes simply by changing the font size specified for the html element:


html {
   font-size: 14px;
}

p {
   font-size: 1rem; /* equals 14px */
}

@media screen and (max-width: 380px) {
   html {
      font-size: 12px; /* make the font smaller for mobile devices */
   }

   p {
      font-size: 1rem; /* now equals 12px */
   }
}

Except for IE 8 and Opera Mini, support for rem is pretty good. To learn more about rem units, read Matthew Lettini’s article “In Defense of Rem Units.”

A Better Way To Work Responsively With Other Complex Content

We are slowly getting better at dealing with images and text in responsive layouts, but we still need to find solutions for other, more complex types of content.

Dealing With Forms on a Responsive Website

Generally speaking, dealing with forms, especially long ones, in responsive Web design is quite a challenge! The longer the form, the more complicated it is to adapt to small devices. The physical adaptation is not that hard; most designers will simply put the form’s elements into a single column and stretch the inputs to the full width of the screen. But making forms visually appealing isn’t enough; we have to make them easy to use on mobile, too.

For starters, Luke Wroblewski advises avoiding textual input where possible and relying instead on checkboxes, radio buttons and select drop-down menus. This way, the user has to type as little as possible. Another tip is not to make the user press the “Send” button before getting feedback about the content of their submission. On-the-fly error-checking is especially important on mobile, where most forms are longer than the height of the screen. If the user has mistyped in a field and has to send the form to realize it, then chances are they won’t even see where they mistyped.

In the future, the new HTML5 form inputs and attributes will be a great help to us in building better forms, without the need for (much) JavaScript. For instance, you could use the required attribute to give feedback about a particular field on the fly. Unfortunately, support for this on mobile devices is poor right now. The autocomplete attribute could also help to make forms more responsive.

A mobile phone is a personal possession, so we can assume that data such as name and postal address will remain consistent. Using the autocomplete HTML5 attribute, we could prefill such fields so that the user doesn’t have to type all of that information over and over. There is also a whole list of new HTML5 inputs that can be used in the near future to make forms more responsive.
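A hedged sketch of such a field, relying on the browser’s native validation and autofill (the field name is illustrative):

<!-- "required" lets the browser flag a missing or malformed value itself;
     "autocomplete" lets it prefill data the user has entered before. -->
<input type="email" name="email" required autocomplete="on">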

Dates in form elements are a good example of what can be improved with HTML5. We used to rely on JavaScript to create date-pickers. Those pickers are quite usable on big desktop screens but very hard to use on touch devices. Selecting the right date with a finger is difficult when the touch zones are so small.

Different picker examples
How am I supposed to select a date when my finger is touching three dates at the same time?

A promising solution lies in the new HTML5 input type="date", whose value is a string in a date format; input type="datetime" takes a string in a combined date-and-time format. The big advantage of this method is that we let the browser decide which UI to use. This way, the UI is automatically optimized for mobile phones. Here is what an input type="date" looks like on the desktop, on an Android phone and tablet (with the Chrome browser), and on the iPhone and iPad.

Mobile input type=date rendering
Renderings of input type="date" on different mobile devices.

Note that the screenshots were taken in my browser and on my Android phone, so the language automatically adapted to the system language (French). By using native components, you no longer have to localize such widgets yourself for different versions of the website.

For now, desktop support for input type="date" is absent except in Opera and Chrome. The native Android browser doesn’t support it at all, but Chrome for Android does, and so does Safari on iOS. A lot still has to be done before we can rely on this solution on responsive websites. Meanwhile, you could use a polyfill such as Mobiscroll for mobile browsers that don’t support it natively.
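The markup itself could not be simpler, and browsers that don’t recognize the type simply fall back to a plain text field:

<label for="arrival">Arrival date</label>
<input type="date" id="arrival" name="arrival">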

Apart from these HTML5 input solutions, attempts have been made to improve other design patterns, such as passwords on mobile and complex input formatting using masks. As you will notice, these are experimental. The perfect responsive form does not exist at the moment; a lot still has to be done in this field.

Dealing With Tables on a Responsive Website

Another content type that gets pretty messy on mobile and responsive websites is tables. Most tables are oriented horizontally and present a lot of data at once, so you can see why getting them right on a small screen is pretty hard. HTML tables are fairly flexible — you can use percentages to change the width of the columns — but then the content can quickly become unreadable.

No one has yet found the perfect way to present tables, but some suggestions have been made.

One approach is to hide what could be considered “less important” columns, and provide checkboxes for the user to choose which columns to see. On the desktop, all columns would be shown, while on mobile, the number of columns shown would depend on the screen’s size. The Filament Group explains this approach and demonstrates it in one of its articles. The solution is also used in the table column toggle on jQuery Mobile.

Responsive table examples
Some examples of responsive tables.

A second approach plays with the idea of a scrollable table. You would “pin” a single fixed-size column on the left and then leave a scroll bar on a smaller part of the table to the right. David Bushell implements this idea in an article, using CSS to display all of the content in the <thead> on the left side of the table, leaving the user to scroll through the content on the right. Zurb uses the same idea but in a different way for its plugin. In this case, the headers stay at the top of the table, and the table is duplicated with JavaScript so that only the first column is shown on the left, and all other columns are shown on the right with a scroll bar.

Responsive table overflow example
Two examples of scrollable responsive tables

The big issue with scroll bars and CSS properties such as overflow: auto is that many mobile devices and tablets simply won’t display a visible scroll bar. The right area of the table will be scrollable, but the user will have no visual clue that that’s possible. We have to find some way to indicate that more content lies to the right.

A third approach is to reflow a large table and split up the columns into what essentially looks like list items with headings. This technique is used in the “reflow mode” on jQuery Mobile and was explained by Chris Coyier in his article “Responsive Data Tables.”

Responsive table reflow example
Reflowing a table responsively
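A common variant of the reflow technique looks like this in CSS (the data-label attribute and the breakpoint are illustrative; Chris Coyier’s original hard-codes the label for each column):

@media (max-width: 600px) {
   /* Make every table element behave like a block, stacking the cells. */
   table, thead, tbody, th, td, tr { display: block; }

   /* Move the original headers off-screen instead of removing them,
      so that screen readers can still announce them. */
   thead tr { position: absolute; top: -9999px; left: -9999px; }

   /* Print each cell's label in front of its content. */
   td { position: relative; padding-left: 50%; }
   td:before { position: absolute; left: 6px; content: attr(data-label); }
}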

Many other techniques exist. Which to use depends heavily on your project. No two projects are the same, so I can only show you how other people have dealt with it. If you come up with a nice solution of your own, please share it with the world in the comments below, on Twitter or elsewhere. We are in this boat together, and tables suck on mobile, really, so let’s improve them together!

Embedding Third-Party Content: The Responsive Iframe Problem

Many websites consist of embedded third-party content: YouTube or Vimeo videos, SlideShare presentations, Facebook applications, Twitter feeds, Google Maps and so on. A lot of those third parties make you use iframes to embed their content. But let’s face it: iframes are a pain to deal with in responsive design. The big problem is that iframes force a fixed width and height directly in your HTML code. Forcing a 100% width on the iframe would work, but then you would lose the ratio of the embedded content. To embed a video or slideshow and preserve the original ratio, you would have to find a workaround.

An HTML and CSS Workaround

Thierry Koblentz has written a good article titled “Creating Intrinsic Ratios for Video,” in which he proposes a way to embed responsive videos using a 16:9 ratio. This solution can be extended to other sorts of iframe content, such as SlideShare presentations and Google Maps. Koblentz wraps the iframe in a container with a class that we can target in CSS. The container makes it possible for the iframe to resize fluidly, even if the iframe has fixed pixel values in the HTML. The code, adapted by Anders M. Andersen, looks like this:


.embed-container {
   position: relative;
   padding-bottom: 56.25%; /* 16:9 ratio */
   padding-top: 30px; /* IE 6 workaround */
   height: 0;
   overflow: hidden;
}

.embed-container iframe,
.embed-container object,
.embed-container embed {
   position: absolute;
   top: 0;
   left: 0;
   width: 100%;
   height: 100%;
}

This will work for all iframes. The only potential problem is that you will have to wrap all of the iframes on your website in a <div class="embed-container"> element. While this would work for developers who have total control over their code or for clients who are reasonably comfortable with HTML, it wouldn’t work for clients who have no technical skill. You could, of course, use some JavaScript to detect iframe elements and wrap them in the container automatically. But as you can see, it’s still a major workaround and not a perfect solution.
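A hedged sketch of such a script, using plain DOM APIs (the class name matches the CSS above):

// Wrap every iframe on the page in a div.embed-container element.
var frames = document.querySelectorAll('iframe');
for (var i = 0; i < frames.length; i++) {
   var wrapper = document.createElement('div');
   wrapper.className = 'embed-container';
   frames[i].parentNode.insertBefore(wrapper, frames[i]);
   wrapper.appendChild(frames[i]);
}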

Dealing With Responsive Video In Future

HTML5 opens a world of possibilities for video — particularly with the video element. The great news is that support for this element is amazingly good for mobile devices! Except for Opera Mini, most major browsers support it. The video element is also pretty flexible. Presenting a responsive video is as simple as this:


video {
   max-width: 100%;
   height: auto;
}

You’re probably asking, “What’s the problem, then?”

The problem is that, even though YouTube or Vimeo may support the video element, you still have to embed videos using the ugly iframe method. And that, my friend, sucks. Until YouTube and Vimeo provide a way to embed videos on websites using the HTML5 video tag, we have to find workarounds to make video embedding work on responsive websites. Chris Coyier created such a workaround as a jQuery plugin called FitVids.js. It uses the first technique mentioned above: creating a wrapper around the iframe to preserve the ratio.

Embedding Google Maps

If you embed a Google Map on your website, the technique described above with the container and CSS will work. But, again, it’s a dirty little hack. Moreover, the map will resize in proportion and might get so tiny that it loses the focus area that you wanted to show the user. Google Maps’ page for mobile says that you can use the static maps API for mobile embedding. Using a static map would indeed make the iframe problems go away. Brad Frost wrote a nice article about, and created a demo of, adaptive maps, which uses this same technique: a JavaScript snippet detects the screen’s size, and the iframe is replaced by a static map on mobile phones. As you can tell, we again have to resort to a trick to deal with the iframe problem, in the absence of a “native” solution (i.e. from Google).

We Need Better APIs

And now the big question: Is there a better way? The biggest problem with using iframes to embed third-party content responsively is the lack of control over the generated code. Developers and designers are severely dependent on the third party and, by extension, its generated HTML. The number of websites that provide content to other websites is growing quickly. We’ll need much better solutions than iframes to embed this content.

Let’s face it: embedding Facebook’s iframe is a real pain. The lack of control over the CSS can make our work look very sloppy and can even sometimes ruin the design. The Web is a very open place, so perhaps now would be a good time to start thinking about more open APIs! In the future, we will need APIs that are better and simpler to use, so that anyone can embed content flexibly, without relying on unresponsive fixed iframes. Until all of those very big third parties decide to create those APIs, we are stuck with sloppy iframes and will have to resort to tricks to make them workable.

Responsive Navigation: An Overview Of Current Solutions

Another big challenge is what to do with navigation. The more complex and deep the architecture of the website, the more inventive we have to be.

An early attempt to deal with this in a simple way was to convert the navigation into a dropdown menu for small screens. Unfortunately, this was not ideal. First, this solution gets terribly complicated with multiple-level navigation. It can also cause some problems with accessibility. I recommend “Stop Misusing Select Menus” to learn about all of the problems such a technique can create.

Some people, including Brad Frost and Luke Wroblewski, have attempted to solve this problem. Brad Frost has compiled some of these techniques on the website This Is Responsive, under the navigation section.

Toggle navigation involves hiding the menu for small devices, displaying only a “menu” link. When the user clicks on it, all of the other links appear as block-level elements below it, pushing the main content below the navigation.

A variant of this, inspired by some native application patterns, is off-canvas navigation. The navigation is hidden beneath a “menu” link or icon. When the user clicks the link, the navigation slides out as a panel from the left or right, pushing the main content over.

Toggle navigation example
Some examples of toggle navigation
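Stripped to its essentials, toggle navigation needs little more than a class toggled by a few lines of JavaScript (the names and breakpoint here are illustrative):

/* Hide the menu on small screens; a script toggles the .open class
   on the nav container when the "menu" link is tapped. */
@media (max-width: 600px) {
   .site-nav ul { display: none; }
   .site-nav.open ul { display: block; }
}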

The problem with these techniques is that the navigation remains at the top of the screen. In his article “Responsive Navigation: Optimizing for Touch Across Devices,” Luke Wroblewski illustrates which zones are easily accessible for different device types. The top left is the hardest to get to on a mobile device.

Easy touch access for mobile and tablet
Easily accessible screen areas on mobile phones and tablets, according to Luke Wroblewski.

Based on this, Jason Weaver created some demos with navigation at the bottom. One solution is a footer anchor, with navigation put at the bottom of the page for small devices, and a “menu” link that sends users there. It uses the HTML anchor link system.

Many other attempts have been made to solve the navigation problem in responsive Web design. As you can see, there is not yet a perfect solution; it really depends on the project and the depth of the navigation. Fortunately for us, some of the people who have tried to crack this nut have shared their experiences with the community.

Another unsolved issue is what icon to use to tell the user, “Hey! There’s a menu hidden under me. Click me!” Some websites use a plus symbol (+), some a grid of squares, others what looks like an unordered list, and some use three lines (aka the burger icon).

Some responsive icons example
To see these icons used on real websites, have a look at “We Need a Standard ‘Show Navigation’ Icon for Responsive Web Design.”

The main problem is figuring out which of these icons would be the most recognizable to the average user. If we all agreed to use one of them, users would be trained to recognize it. But which one should we choose? I really would like to know which icon you use, so don’t hesitate to share it in the comments below.

Mobile Specificities: “Is The User On A Mobile Device? If So, What Can It Do?”

Mobile and tablet devices are a whole new world, far removed from desktop computers, with their own rules, behaviors and capabilities. We might want to adapt our designs to this new range of capabilities.

Detecting Touch Capabilities With Native JavaScript

Apart from screen size, I bet if you asked what is the main difference between desktop and mobile (including tablets), most people would say touch capability. There is no mouse on a mobile phone (no kidding!), and except for some rare hybrid devices into which you can plug a mouse, you can’t do much with mouse events on a tablet. This means that, depending on the browser, the :hover CSS pseudo-class might not work. Some browsers are clever enough to provide a native fallback for the hover event by translating it into a touch event. Unfortunately, not all browsers are so flexible. Creating a design that doesn’t depend on hidden elements being revealed on :hover events would be wise.

Catching touch events could be another solution. A W3C working group has started work on a touch events specification. In the future, we will be able to catch events such as touchstart, touchmove and touchend, and deal with them directly in JavaScript without requiring a third-party framework such as Hammer.js or jGestures. But JavaScript is one thing — what about CSS?

CSS Level 4 “Pointer” Media Query

CSS Level 4 specifies a new media query called “pointer”, which can be used to query the presence and accuracy of a pointing device, such as a mouse. The media query takes one of three values:

  • none
    The device does not have any pointing device at all.
  • coarse
    The device has a pointing device with limited accuracy; for example, a mobile phone or tablet with touch capabilities, where the “pointer” would be a finger.
  • fine
    The device has an accurate pointing device, such as a mouse, trackpad or stylus.

Using this media query, we can enlarge buttons and links for touch devices:


@media (pointer: coarse) {
   input[type="submit"],
   a.button {
      min-width: 30px;
      min-height: 40px;
      background: transparent;
   }
}

The pointer media query is not yet supported and is merely being proposed. Nevertheless, the potential is huge because it would enable us to detect touch devices via CSS, without the need for a third-party library, such as Modernizr.

CSS Level 4 “Hover” Media Query

The CSS Level 4 specification proposes a new hover media query, which detects whether a device’s primary pointing system can hover. It returns a Boolean: 1 if the device supports hover, 0 if not. Note that it has nothing to do with the :hover pseudo-class.

Using the hover media query, we can enhance an interface by hiding certain content by default on devices that can hover, and revealing it only when the user actually hovers. The code would look something like this:


@media (hover) {
   /* Hide the extra content by default, but only on devices that can hover. */
   .hovercontent { display: none; }

   /* Reveal it when its parent is hovered; a hidden element cannot be hovered itself. */
   .parent:hover .hovercontent { display: block; }
}

It can also be used to create dropdown menus on hover; and the fallback for mobile devices is in native CSS, without the need for a feature-detection framework.

CSS Level 4 Luminosity Media Query

Another capability of mobile devices is the luminosity sensor. The CSS Level 4 specification has a media query for luminosity, which gives us access to a device’s light sensors directly in the CSS. Here is how the specification describes it:

“The “luminosity” media feature is used to query about the ambient luminosity in which the device is used, to allow the author to adjust style of the document in response.”

In the future, we will be able to create websites that respond to ambient luminosity. This will greatly improve user experiences. We will be able to detect, for example, exceptionally bright environments using the washed value, adapting the website’s contrast accordingly. The dim value is used for dim environments, such as at nighttime. The normal value is used when the luminosity level does not need any adjustment.

The code would look something like this:


@media (luminosity: washed) {
   p { background: white; color: black; font-size: 2em; }
}

As you can see, CSS Level 4 promises a lot of fun new stuff. If you are curious to see what’s in store, not only mobile-related, then have a look at “Sneak Peek Into the Future: Selectors, Level 4.”

More Mobile Capabilities to Detect Using APIs and JavaScript

Many other things could be detected to make the user experience amazing on a responsive website. For example, we could gain access to the native gyroscope, compass and accelerometer to detect the device’s orientation using the HTML5 DeviceOrientationEvent. Support for DeviceOrientationEvent in Android and iOS browsers is getting better, but the specification is still a draft. Nevertheless, the API looks promising. Imagine playing full HTML5 games directly in the browser.
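Reading the sensors takes only an event listener (alpha, beta and gamma are rotation angles in degrees):

window.addEventListener('deviceorientation', function (event) {
   console.log(event.alpha, event.beta, event.gamma);
});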

Another API that would be particularly useful for some mobile users is geolocation. The good news is that it’s already well supported. This API enables us to geolocate the user using GPS and to infer their location from network signals such as IP address, RFID, Wi-Fi and Bluetooth MAC addresses. This can be used on some responsive websites to provide users with contextual information. A big restaurant chain could enhance its mobile experience by showing the user the locations of restaurants in their area. The possibilities are endless.
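A minimal sketch of the API (the browser asks the user for permission before revealing the position):

navigator.geolocation.getCurrentPosition(function (position) {
   // Coordinates in decimal degrees.
   console.log(position.coords.latitude, position.coords.longitude);
});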

The W3C also proposed a draft for a vibration API. With it, the browser can provide tactile feedback to the user in the form of vibration. This, however, is creeping into the more specific field of Web applications and mobile games in the browser.

Another API that has been highly discussed is the network information API. The possibility of measuring a user’s bandwidth and optimizing accordingly has seduced many developers. We would be able to serve high-quality images to users with high bandwidth and low-quality images to users with low bandwidth. With the bandwidth attribute of the network API, it would be possible to estimate the downloading bandwidth of a user in megabytes per second. The second attribute, metered, is a Boolean that tells us whether the user has a metered connection (such as from a prepaid card). These two attributes are currently accessible only via JavaScript.
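Only prefixed, experimental implementations of this draft existed at the time of writing; a hedged sketch (Firefox, for instance, exposed it as navigator.mozConnection):

var connection = navigator.connection || navigator.mozConnection;
if (connection) {
   // bandwidth: estimated downlink in MB/s (Infinity when unknown);
   // metered: true when the user pays by traffic volume.
   console.log(connection.bandwidth, connection.metered);
}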

Unfortunately, measuring a user’s connection is technically difficult, and a connection could change abruptly. A user could go into a tunnel and lose their connection, or their speed could suddenly drop. So, a magical media query that measures bandwidth looks hypothetical at the moment. Yoav Weiss has written a good article about the problems that such a media query would create and about bandwidth measurement, “Bandwidth Media Queries? We Don’t Need ’Em!”

Many other APIs deal with mobile capabilities. If you are interested in learning more, Mozilla has a very detailed list. Most are not yet fully available or standardized, and most are intended more for Web applications than for responsive websites. Nevertheless, it’s a great overview of how large and complex mobile websites could get in future.

Rethinking The Way We And The User Deal With Content

From a technical perspective, there are still a lot of challenges in dealing with content at a global scale. The mobile-first method has been part of the development and design process for a little while now. We could, for example, serve to mobile devices the minimum data that is necessary, and then use JavaScript and AJAX to conditionally load more content and images for desktops and tablets. But to do this, we would also have to rethink how we deal with content, prioritizing it so that we can generate content that is flexible and adaptive enough. A good example of this is the responsive map solution described above: we load an image for mobile, and enhance the experience with a real map for desktops. The more responsive the website, the more complex dealing with content gets. Flexible code can help us format adaptive content.

One way suggested by some people in the industry is to create responsive sentences by marking up sentences with a lot of spans that have classes, and then displaying certain ones according to screen size. Trimming parts of sentences for small devices is possible with media queries. You can see this technique in action on 37signals’ Signal vs. Noise blog and in Frankie Roberto’s article “Responsive Text.” Even if such a technique could be used to enhance small parts of a website, such as the footer slogan, applying it to all of the text on a website is hard to imagine.
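In its simplest form, the technique looks like this (class name and breakpoint are illustrative):

<p>Meet us in Paris<span class="hide-small">, a short walk from the Gare de Lyon</span>.</p>

@media (max-width: 400px) {
   .hide-small { display: none; } /* trim the sentence on small screens */
}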

This raises an issue in responsive Web design that will become more and more important in future: the importance of meta data and the semantic structure of content. As mentioned, the content on our websites does not only come from in-house writers. If we want to be able to automatically reuse content from other websites, then it has to be well structured and prepared for it. New HTML5 tags such as article and section are a good start to gaining some semantic meaning. The point is to think about and structure content so that a single item (say, a blog post) can be reused and displayed on different devices in different formats.

The big challenge will be to make meta data easily understandable to all of the people who are part of the content creation chain of the website. We’ll have to explain to them how the meta data can be used to prioritize content and be used to programmatically assemble content, while being platform-independent. A huge challenge will be to help them start thinking in terms of reusable blocks, rather than a big chunk of text that they copy and paste from Microsoft Word to their WYSIWYG content management system. We will have to help them understand that content and structure are two separate and independent things, just as when designers had to understand that content (HTML) and presentation (CSS) are best kept separate.

We can’t afford to write content that is oriented towards only one platform anymore. Who knows on which devices our content will be published in six months, or one year? We need to prepare our websites for the unexpected. But to do so, we need better publishing tools, too. Karen McGrane gave a talk on “Adapting Ourselves to Adaptive Content,” with some real examples from the publishing industry. She speaks about the process of creating reusable content and introduces the idea of COPE: create once, publish everywhere. We need to build better CMSes, ones that can use and generate meta data to prioritize content. We need to explain to people how the system works, so that they think in terms of modular, reusable content objects instead of WYSIWYG pages. As McGrane says:

“You might be writing three different versions of that headline; you might be writing two different short summaries and you are attaching a couple of different images to it, different cut sizes, and then you may not be the person who is in charge of deciding what image or what headline gets displayed on that particular platform. That decision will be made by the metadata. It will be made by the business rules. […] Metadata is the new art direction.”

Truncating content for small devices is not a future-proof content strategy. We need CMSes that provide the structure needed to create such reusable content. We need better publishing workflows in CMSes, too. Clunky interfaces scare users, and most people who create content are not particularly comfortable with complicated tools. We will have to provide them with tools that are easy to understand and that help them publish clean, semantic content that is independent of presentation.

Conclusion

As long as this article is, it only scratches the surface. By now, most of Smashing Magazine’s readers understand that responsive Web design is much more than throwing a bunch of media queries on the page, choosing the right breakpoints and doubling the size of images for those cool new high-density phones. As you can see, the path is long, and we are not there yet. There are still many unsolved issues, and the perfect responsive solution does not exist yet.

Some technical solutions might be discovered in future using some of the new technologies presented here and with the help of the W3C, the WHATWG and organizations such as the Filament Group.

More importantly, we Web designers and developers can help to find even better solutions. People such as Luke Wroblewski and Brad Frost and all of the amazing women and men mentioned in this article are experimenting with a lot of different techniques and solutions. Whether any succeeds or fails, the most important thing is to share what we — as designers, developers, content strategists and members of the Web design community — are doing to try to solve some of the challenges of responsive Web design. After all, we are all in the same boat, trying to make the Web a better place, aren’t we?

(al) (ea)

© Stéphanie Walter for Smashing Magazine, 2013.

Original author: 
Peter Kafka

Thanks to Quartz’s Zach Seward for jogging my memory about this oldie and goodie: Tumblr’s David Karp in a video interview taped in 2007, when he was 21, had 75,000 users and was talking about stuff like Digg, Flickr … and Twitter.

Karp’s interviewer is Howard Lindzon, who’s now known as the guy behind StockTwits. Assuming that the interview was taped close to the time it was published, it would have meant that the two men were talking as Karp was raising his first funding round of $750,000, led by Union Square Ventures and Spark Capital.

No need to say anything else:

Original author: 
Ina Fried

Although Google is offering a limited set of developer tools for Glass — and more are on the way — the company doesn’t want to stop hackers from tinkering even further.


Indeed, during a developer conference session on Thursday, Google showed a variety of ways to gain deeper access to Glass. Some of these, such as running basic Android apps and even connecting a Bluetooth keyboard, can be done as is.

Google showed other hacks, such as running a version of Ubuntu Linux. Those actions, though, require deeper “root” access to the device. Google showed how developers can get such access, but cautions that doing so voids the warranty and could be irreversible.

That said, Google plans to make its factory image available, so in most cases rooted Glass devices should be restorable to their original settings.

The session ended with a video showing a pair of the pricey specs being blended to a powdery mess, to heartfelt groans from the packed audience, many of whom forked over $1,500 to be among the first to buy the developer edition of Glass.

Showing a different level of interest in Glass, several members of Congress sent a letter to Google CEO Larry Page on Thursday asking questions about privacy issues raised by the high-tech specs.

Update: At a follow-up Fireside Chat session with developers, Google reiterated that a software development kit for Glass is coming, but Google’s Charles Mendis said not to expect it soon.

Isabelle Olsson, the lead designer for Glass, showed off one of the bulky early prototype designs for Glass as well as a current prototype that combines Glass with prescription glasses.

Prescription Google Glass prototype

Olsson, who quips that she has been working on Glass since it was a phone attached to a scuba mask, said that the development of Glass was “so ambitious and very messy.”

Getting the device light enough has been a key, Olsson said.

“If it is not light you are not going to want to wear it for more than 10 minutes,” Olsson said. “We care about every gram.”

Asked what kind of apps the Glass team would like to see, Olsson said she wanted a karaoke app, while Mendis said he would like to see some fitness apps.

Google Glass product director Steve Lee said Glass is designed around brief glances or “micro-interactions,” rather than watching a movie or reading an entire book.

“That would be painful,” Lee said. “We don’t want to create zombies staring into the screen for long periods of time.”


Original author: 
Sean Gallagher

Think mobile devices are low-power? A study by the Center for Energy-Efficient Telecommunications—a joint effort between AT&T's Bell Labs and the University of Melbourne in Australia—finds that wireless networking infrastructure worldwide accounts for 10 times more power consumption than data centers worldwide. In total, it is responsible for 90 percent of the power usage by cloud infrastructure. And that consumption is growing fast.

The study was in part a rebuttal to a Greenpeace report that focused on the power consumption of data centers. "The energy consumption of wireless access dominates data center consumption by a significant margin," the authors of the CEET study wrote. One of the findings of the CEET researchers was that wired networks and data-center based applications could actually reduce overall computing energy consumption by allowing for less powerful client devices.

According to the CEET study, by 2015, wireless “cloud” infrastructure will consume as much as 43 terawatt-hours of electricity worldwide while generating 30 megatons of carbon dioxide. That’s the equivalent of 4.9 million automobiles’ worth of carbon emissions. This projected power consumption is a 460 percent increase from the 9.2 TWh consumed by wireless infrastructure in 2012.


Original author: 
Kara Swisher


Earlier today, Yahoo said it had acquired the trendy and decidedly stylish news reading app Summly, along with its telegenic and very young entrepreneur Nick D’Aloisio.

Yahoo said it plans to close down the actual app and use the algorithmic summation technology that the 17-year-old D’Aloisio built with a small team of five, along with a major assist from Silicon Valley research institute SRI International, throughout its products.

While Yahoo did not disclose the price, several sources told me that the company paid $30 million — 90 percent in cash and 10 percent in stock — to buy the London-based Apple smartphone app.

And despite its elegant delivery, that’s a very high price, especially since Summly has been downloaded slightly less than one million times since launch — after a quick start amid much publicity over its founder — with about 90 million “summaries” read. Of course, like many such apps, it also had no monetization plan as yet.

What Yahoo is getting, though, is perhaps more valuable — the ability to put the fresh-faced D’Aloisio front and center of its noisy efforts to make consumers see Yahoo as a mobile-first company. That has been the goal of CEO Marissa Mayer, who has bought up a range of small mobile startups since she took over nine months ago and who has talked about the need for Yahoo to focus on the mobile arena above all.

Mayer met with D’Aloisio, said sources, although the deal was struck by voluble M&A head Jackie Reses.

Said one person close to the deal, about the founder: “Nick will be a great person to put in front of the media and consumers with Mayer to make Yahoo seem like it is a place that loves both entrepreneurs and mobile experiences, which in turn will presumably attract others like him.”

Having met the young man in question, who was in San Francisco in the fall on a fundraising trip, I can see the appeal. He’s both well-spoken and adorkable, as well as very adept at charming cranky media types like me by radiating with the kinetic energy of someone born in the mobile world (you can see that in full force in the video below with actor and Summly investor Stephen Fry).

Still, D’Aloisio is very young and presumably has a lot of other entrepreneurial goals, which is why, as part of the deal, he agreed to officially stay at Yahoo for only 18 months, multiple sources told me. In many cases, startup founders strike such short-term employment deals with big companies, agreeing to stay for a certain determined time period.

He will also remain in England, where he lives with his parents, said sources. In addition, only two of Summly’s employees will go to Yahoo with D’Aloisio.

That’s $10 million each, along with a nifty app Yahoo will not be using as is (too bad, as it would up the hip and fun factor of Yahoo’s apps by a factor of a gazillion if it were maintained).

“It works out on a lot of levels,” said another person close to the situation. “Nick is a founder that will make Mayer and Yahoo look cutting edge.”

Cue the parade of PR profiles of the young genius made millionaire, helping Yahoo become relevant again.

I have an email for comment into the always friendly D’Aloisio. But I don’t expect a reply, since he has apparently been specifically instructed by the martinets of Yahoo PR not to talk to me any longer — well, for 18 months at least! (Don’t worry, Nick, I don’t blame you and will still listen to whatever you are pitching next, since you are so dang compelling and I enjoyed using Summly!)

Until then, here’s the faboo Summly video, with the best chairs ever:

Summly Launch from Summly on Vimeo.

Original author: 
Ben Rooney

It was hard to avoid the message at the recent Mobile World Congress in Barcelona. The GSMA, the organizing body, was keen for everyone to believe that Near Field Communication might finally be about to have its day.

NFC has been a decade in the making, and has always been about to be “The Next Big Thing.” It is a contactless radio technology that can transmit data between two devices within a few centimeters of each other. Coupled with a security chip to encrypt data, it promises to transform a wide range of consumer experiences from simple ticketing to the Holy Grail of replacing your cash and payment cards with just your smartphone. The key word there is “promise.”

