
User interface techniques


A video of a new virtual reality prototype that combines Oculus Rift and Razer Hydra technology shows how users can move around artificial environments with their minds.

To interact with the prototype through thought, users need an Emotiv EPOC, a device that reads brainwaves and maps certain thought patterns to actions. Naturally, this is still in its experimental phase; while virtual reality technology is becoming affordable, the Emotiv EPOC's capabilities are "still quite primitive" and not wholly user friendly, writes developer Chris Zaharia.

"With my experience in the education industry through my startup Zookal and keen interest in neuroscience, I had a thought around how these technologies could be used together to enhance education and at the same time, see how far can we go with using cognitive control in a virtual simulation," he writes.

Zaharia hopes to explore the possibility of using virtual reality for educational purposes ranging from engineering to biology. The demo offers a look at what is currently possible using virtual reality headsets, motion tracking through Razer Hydra and cognitive control in virtual simulations.

Original author: Thomas Joos

  

As a mobile UI or UX designer, you probably remember the launch of Apple’s first iPhone as if it were yesterday. Among other things, it introduced a completely touchscreen-centered interaction to an individual’s most private and personal device. It was a game-changer.

Today, kids grow up with touchscreen experiences like it’s the most natural thing. Parents are amazed by how fast their children understand how a tablet or smartphone works. This shows that touch and gesture interactions have a lot of potential to make mobile experiences easier and more fun to use.

Challenging Bars And Buttons

The introduction of “Human Interface Guidelines” and Apple’s App Review Board had a great impact on the quality of mobile applications. It helped a lot of designers and developers understand the core mobile UI elements and interactions. One of Apple’s popular suggestions, for instance, is to use UITabBar and UINavigationBar components — a guideline that many of us have followed, including me.

In fact, if you can honestly say that the first iPhone application you designed didn’t have any top or bottom bar elements, get in touch and send over a screenshot. I will buy you a beer and gladly tweet that you were ahead of your time.

My issue with the top and bottom bars is that they fill almost 20% of the screen. When designing for a tiny canvas, we should use every available pixel to focus on the content. In the end, that’s what really matters.

In this innovative industry, mobile designers need some time to explore how to design more creative and original interfaces. Add to that Apple’s frustrating rejection of apps that “think outside the box,” and it is no surprise that experimental UI and UX designs such as Clear and Rise took a while to see the light of day. But they are here now. And while they might be quite extreme and focused on high-brow users and early adopters, they show us the great creative potential of gesture-driven interfaces.

Rise and Clear
Pulling to refresh feels very intuitive.

The Power Of Gesture-Driven Interfaces

For over two years now, I’ve been exploring the ways in which gestures add value to the user experience of a mobile application. The most important criterion for me is that these interactions feel very intuitive. This is why creative interactions such as Loren Brichter’s “Pull to Refresh” have become a standard in no time. Brichter’s interaction, introduced in Tweetie for iPhone, feels so intuitive that countless list-based applications suddenly adopted the gesture upon its appearance.

Removing UI Clutter

A great way to start designing a more gesture-driven interface is to use your main screen only as a viewport to the main content. Don’t feel obliged to make important navigation always visible on the main screen. Rather, consider giving it a place of its own. Speaking in terms of a virtual 2-D or 3-D environment, you could design the navigation somewhere next to, below, behind, in front of, above or hidden on top of the main view. A dragging or swiping gesture is a great way to lead the user to this UI element. It’s up to you to define and design the app.

What I like about Facebook and Gmail on iOS, for instance, is their implementation of a “side-swiping” menu. This trending UI concept is very easy to use. Users swipe the viewport to the right to reveal navigation elements. Not only does this make the app very content-focused, but accessing any section of the application takes only two to three touch interactions. A lot of apps do far worse than that!

Sideswipe Menu
Facebook and Gmail’s side-swiping menu

In addition to the UI navigation, your app probably supports contextual interactions, too. Adding the same two or three buttons below every content item will certainly clutter the UI! While buttons might seem to be useful triggers, gestures have great potential to make interaction with content more intuitive and fun. Don’t hesitate to integrate simple gestures such as tapping, double-tapping and tapping-and-holding to trigger important interactions. Instagram supports a simple double-tap to perform one of its key features, liking and unliking a content item. I would not be surprised to see other apps integrate this shortcut in the near future.
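To make this concrete in a web context, here is a minimal, hypothetical sketch of double-tap detection in plain JavaScript; the 300 ms threshold, the .content-item selector and the likeItem() handler are illustrative assumptions, and a native app would use the platform's own gesture recognizers instead.

// Hypothetical sketch: treat two taps within 300 ms on a content item as a double-tap.
var lastTap = 0;

document.querySelector('.content-item').addEventListener('touchend', function (event) {
  var now = Date.now();
  if (now - lastTap < 300) {
    likeItem(this);           // hypothetical handler that toggles the item's "like" state
    event.preventDefault();   // helps avoid the browser's double-tap zoom
  }
  lastTap = now;
});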

An Interface That Fits

When designing an innovative mobile product, predicting user behavior can be very difficult. When we worked with Belgium’s Public Radio, my team really struggled with the UI balance between music visualization and real-time news. The sheer number of contextual scenarios and preferences made it very hard to come up with the perfect UI. So, we decided to integrate a simple dragging gesture to enable users to adjust the balance themselves.

Radio+
By dragging, users can balance music-related content and live news.

This gesture adds a creative contextual dimension to the application. The dragging gesture does not take the user from one section (news or music) to another. Rather, it enables the user to focus on the type of content they are most interested in, without missing out on the other.

Think in Terms of Time, Dimension and Animation

What action is triggered when the user taps an item? And how do you visualize that it has actually happened? How fast does a particular UI element animate into the viewport? Does it automatically go off-screen after five seconds of no interaction?

The rise of touch and gesture-driven devices dramatically changes the way we design interaction. Instead of thinking in terms of screens and pages, we are thinking more in terms of time, dimension and animation. You’ve probably noticed that fine-tuning user interactions and demonstrating them to colleagues and clients with static wireframe screenshots is not easy. You don’t fully see, understand and feel what will happen when you touch, hold, drag and swipe items.

Certain prototyping tools, including Pop and Invision, can help bring wireframes to life. They are very useful for testing an application’s flow and for pinpointing where and when a user might get stuck. Your application has a lot more going on than simple back-and-forth navigation, so you need to detect interface bugs and potential sources of confusion as soon as possible. You wouldn’t want your development team to point them out to you now, would you?

InvisionApp
Invision enables you to import and link your digital wireframes.

To be more innovative and experimental, get together with your client first and explain that a traditional wireframe is not the UX deliverable that they need. Show the value of interactive wireframes and encourage them to include this in the process. It might increase the timeline and budget, but if they are expecting you to go the extra mile, it shouldn’t be a problem.

I even offer to produce a conceptual interface video for my clients as well, because once they’ve worked with the interactive wireframes and sorted out the details, my clients will often need something sexier to present to their internal stakeholders.

The Learning Curve

When designing gesture-based interactions, be aware that every time you remove UI clutter, the application’s learning curve goes up. Without visual cues, users could get confused about how to interact with the application. A bit of exploration is no problem, but users should know where to begin. Many apps show a UI walkthrough when first launched, and I agree with Max Rudberg that walkthroughs should explain only the most important interactions. Don’t explain everything at once. If it’s too explicit and long, users will skip it.

Why not challenge yourself and gradually introduce creative UI hints as the user uses the application? This pattern is often referred to as progressive disclosure and is a great way to show only the information that is relevant to the user’s current activity. YouTube’s Capture application, for instance, tells the user to rotate the device to landscape orientation just as the user is about to open the camera for the first time.

Visual Hints
Fight the learning curve with a UI walkthrough and/or visual hints.

Adding visual cues to the UI is not the only option. In the Sparrow app, the search bar appears for a few seconds, before animating upwards and going off screen, a subtle way to say that it’s waiting to be pulled down.

Stop Talking, Start Making

The iPhone ushered in a revolution in interactive communication. Only five years later, touchscreen devices are all around us, and interaction designers are redefining the ways people use digital content.

We need to explore and understand the potential of touch and gesture-based interfaces and start thinking more in terms of time, dimension and animation. As demonstrated by several innovative applications, gestures are a great way to make an app more content-focused, original and fun. And many gesture-based interactions that seem too experimental at first come to be seen as very intuitive.

For a complete overview of the opportunities for gestures on all major mobile platforms, check out Luke Wroblewski’s “Touch Gesture Reference Overview.” I hope you’re inspired to explore gesture-based interaction and intensify your adventures in mobile interfaces. Don’t be afraid to go the extra mile. With interactive wireframes, you can iterate your way to the best possible experience. So, let’s stop talking and start making.


© Thomas Joos for Smashing Magazine, 2013.

Original author: Sean Hollister


Three months ago, celebrated video game publisher Valve did something completely out of character: it fired up to 25 workers, in what one employee dubbed the "great cleansing." At the time, co-founder Gabe Newell quickly reassured gamers that the company wouldn't be canceling any projects, but it just so happens that one project managed to get away.

Valve was secretly working on a pair of augmented reality glasses... and those glasses are still being built by two Valve employees who lost their jobs that day.

"This is what I'm going to build come hell or high water."

Former Valve hardware engineer Jeri Ellsworth and programmer Rick Johnson spent over a year working on the project at Valve, and have been putting in six days a week, 16+...


Original author: Andrew Cunningham

Aurich Lawson / Thinkstock

Welcome back to our three-part series on touchscreen technology. Last time, Florence Ion walked you through the technology's past, from the invention of the first touchscreens in the 1960s all the way up through the mid-2000s. During this period, different versions of the technology appeared in everything from PCs to early cell phones to personal digital assistants like Apple's Newton and the Palm Pilot. But all of these gadgets proved to be little more than a tease, a prelude to the main event. In this second part in our series, we'll be talking about touchscreens in the here-and-now.

When you think about touchscreens today, you probably think about smartphones and tablets, and for good reason. The 2007 introduction of the iPhone kicked off a transformation that turned a couple of niche products—smartphones and tablets—into billion-dollar industries. The current fierce competition from software like Android and Windows Phone (as well as hardware makers like Samsung and a host of others) means that new products are being introduced at a frantic pace.

The screens themselves are just one of the driving forces that makes these devices possible (and successful). Ever-smaller, ever-faster chips allow a phone to do things only a heavy-duty desktop could do just a decade or so ago, something we've discussed in detail elsewhere. The software that powers these devices is more important, though. Where older tablets and PDAs required a stylus or interaction with a cramped physical keyboard or trackball to use, mobile software has adapted to be better suited to humans' native pointing device—the larger, clumsier, but much more convenient finger.


Original author: Scott Gilbertson

Hybrids. Image: Screenshot/Webmonkey.

The advent of hybrid laptops that double as tablets or offer some sort of touch input has greatly complicated the life of web developers.

A big part of developing for today’s myriad screens is knowing when to adjust the interface, based not just on screen size, but other details like input device. Fingers are far less precise than a mouse, which means bigger buttons, form fields and other input areas.

But with hybrid devices like touchscreen Windows 8 laptops or dockable Android tablets with keyboards, how do you know whether the user is browsing with a mouse or a finger?

Over on the Mozilla Hacks blog Patrick Lauke tackles that question in an article on detecting touch-capable devices. Lauke covers the relatively simple case of touch-only, like iOS devices, before diving into the far more complex problem of hybrid devices.

Lauke’s answer? If developing for the web hasn’t already taught you this lesson, perhaps hybrid devices will — learn to live with uncertainty and accept that you can’t control everything.

What’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities.

While learning to live with uncertainty and providing interfaces that work with any input sounds nice in theory, developers are bound to want something a bit more concrete. There’s some hope on the horizon. Microsoft has proposed the Pointer Events spec (and created a build of Webkit that supports it). And the CSS Media Queries Level 4 spec will offer a pointer query to see what sort of input device is being used (mouse, finger, stylus etc).

Unfortunately, neither Pointer Events nor Media Queries Level 4 are supported in today’s browsers. Eventually there probably will be some way to easily detect and know for certain which input device is being used, but for the time being you’re going to have to live with some level of uncertainty. Be sure to read through Lauke’s post for more details and some sample code.
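As a rough, hedged illustration of that input-agnostic mindset, the sketch below combines the naive touch check with the proposed pointer media feature and uses the result only to adjust presentation, not to switch the interface wholesale; the large-targets class name is an assumption, and the pointer query only returns something meaningful in browsers that implement it.

// Naive capability check: only says touch events exist, not that the user is touching.
var hasTouchEvents = 'ontouchstart' in window;

// Media Queries Level 4 "pointer" feature, where supported:
// is the primary pointing device coarse (a finger) or fine (a mouse or stylus)?
var coarsePrimaryPointer = window.matchMedia &&
    window.matchMedia('(pointer: coarse)').matches;

// Input-agnostic spirit: use detection to enlarge hit targets and spacing,
// never to remove functionality for one class of input.
if (hasTouchEvents || coarsePrimaryPointer) {
  document.documentElement.className += ' large-targets';
}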

Original author: Florence Ion

Aurich Lawson / Thinkstock

It's hard to believe that just a few decades ago, touchscreen technology could only be found in science fiction books and film. These days, it's almost unfathomable how we once got through our daily tasks without a trusty tablet or smartphone nearby, but it doesn't stop there. Touchscreens really are everywhere. Homes, cars, restaurants, stores, planes, wherever—they fill our lives in spaces public and private.

It took generations and several major technological advancements for touchscreens to achieve this kind of presence. Although the underlying technology behind touchscreens can be traced back to the 1940s, there's plenty of evidence that suggests touchscreens weren't feasible until at least 1965. Popular science fiction television shows like Star Trek didn't even refer to the technology until Star Trek: The Next Generation debuted in 1987, almost two decades after touchscreen technology was even deemed possible. But their inclusion in the series paralleled the advancements in the technology world, and by the late 1980s, touchscreens finally appeared to be realistic enough that consumers could actually bring the technology into their own homes.

This article is the first of a three-part series on touchscreen technology's journey from fiction to fact. The first three decades of touch are important to reflect upon in order to really appreciate the multitouch technology we're so used to having today. Today, we'll look at when these technologies first arose and who introduced them, plus we'll discuss several other pioneers who played a big role in advancing touch. Future entries in this series will study how the changes in touch displays led to essential devices for our lives today and where the technology might take us in the future. But first, let's put finger to screen and travel to the 1960s.



Augmented reality for mobile devices has grown in popularity in recent years partly because of the proliferation of smart phones and tablet computers equipped with exceptional cameras and partly because of developments in computer vision algorithms that make implementing such technologies on embedded systems possible.

Such augmented reality applications have typically been limited to a single user receiving additional information about a physical entity or interacting with a virtual agent. Researchers at MIT’s Media Lab have taken augmented reality to the next level by developing a multi-user collaboration tool that allows users to augment reality and share those augmentations with other users, essentially turning the real world into a digital canvas for all to share.

The Second Surface project, as it is known, is described as follows:

…a novel multi-user Augmented reality system that fosters a real-time interaction for user-generated contents on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. The system allows users to place three dimensional drawings, texts, and photos relative to such objects and share this expression with any other person who uses the same software at the same spot.

If you still have difficulty understanding how this works, or why I believe it will be a game-changing technology for augmented reality and mobile devices once it is made available to the general public, check out the following explanatory video.

Now, imagine combining this technology with Google Glass and free-form gesture recognition. How awesome would that be?



Like the Roman god Janus (and many a politician), every web application has two faces: Its human face interacts with people, while its machine face interacts with computer systems, often as a result of those human interactions. Showing too much of either face to the wrong audience creates opportunity for error.

When a user interface—intended for human consumption—reflects too much of a system’s internals in its design and language, it’s likely to confuse the people who use it. But at the same time, if data doesn’t conform to a specific structure, it’s likely to confuse the machines that need to use it—so we can’t ignore system requirements, either.

People and machines parse information in fundamentally different ways. We need to find a way to balance the needs of both.

Enter the Robustness Principle

In 1980, computer scientist Jon Postel published an early specification for the Transmission Control Protocol, which remains the fundamental communication mechanism of the internet. In this spec, he gave us the Robustness Principle:

Be conservative in what you do, be liberal in what you accept from others.

Although often applied to low-level technical protocols like TCP, this golden rule of computing has broad application in the field of user experience as well.

To create a positive experience, we need to give applications a human face that’s liberal: empathetic, flexible, and tolerant of any number of actions the user might take. But for a system to be truly robust, its machine face must also take great care with the data it handles—treating user input as malicious by default, and validating the format of everything it sends to downstream systems.

Building a system that embraces these radically different sets of constraints is not easy. At a high level, we might say that a robust web application is one that:

  1. Accepts input from users in a variety of forms, based first on the needs and preferences of humans rather than machines.
  2. Accepts responsibility for translating that human input to meet the requirements of computer systems.
  3. Defines boundaries for what input is reasonable in a given context.
  4. Provides clear feedback to the user, especially when the translated input exceeds the defined boundaries.

Whether it’s a simple form or a sophisticated application, anytime we ask users for input, their expectations are almost certainly different from the computer’s in some way. Our brains are not made of silicon. But thinking in terms of the Robustness Principle can help us bridge the gap between human and machine in a wide range of circumstances.

Numbers

Humans understand the terms “one,” “1,” and “1.00” to be roughly equivalent. They are very different to a computer, however. In most programming languages, each is a different type of data with unique characteristics. Trying to perform math on the wrong kind of data could lead to unexpected results. So if a web application needs the user to enter a number, its developers want to be sure that input meets the computer’s definition. Our users don’t care about such subtleties, but they can easily bubble up into our user interfaces.

When you buy something over the phone, the person taking your order never has to say, “Please give me your credit card number using only digits, with no spaces or dashes.” She is not confused if you pause while speaking or include a few “umms.” She knows a number when she hears one. But such prompts commonly litter web forms, instructing users to cater to the computer’s needs. Wouldn’t it be nice if the computer could cater to the person’s needs instead?

It often can, if we put the Robustness Principle to work to help our application take a variety of user input and turn it into something that meets the needs of a machine.

For example, we could do this right at the interface level by modifying fields to pre-process the user’s input, providing immediate feedback to the user about what’s happening. Consider an input field that’s looking for a currency value:

Form input requesting a currency value

HTML5 introduces some new attributes for the input element, including a type of number and a pattern attribute, intended to give developers a way to define the expected format of information. Unfortunately, browser support for these attributes remains limited and inconsistent. But a bit of JavaScript can do the same work. For example:

<input onkeyup="value=value.replace(/[^0-9\.]/g,'')" />
<input onblur="if(value.match(/[^0-9\.]/)) raise_alert(this)" />

The first input simply blocks any characters that are not digits or decimal points from being entered by the user. The second triggers a notification instead.
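For comparison, here is a minimal sketch of the declarative route mentioned above, using the number type and the pattern attribute; the name, step and title values are illustrative assumptions, and, as noted, browser support for these attributes was limited and inconsistent at the time.

<input type="number" name="amount" step="0.01" min="0">
<input type="text" name="amount" pattern="[0-9]*\.?[0-9]*" title="Digits and an optional decimal point only">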

We can make these simple examples far more sophisticated, but such techniques still place the computer’s rules in the user’s way. An alternative might be to silently accept anything the user chooses to provide, and then use the same regular expressions to process it on the server into a decimal value. Following guideline number three, the application would perform a sanity check on the result and report an error if a user entered something incomprehensible or out of the expected range.
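Here is a minimal sketch of that liberal-accept, sanity-check flow in JavaScript, assuming a hypothetical donation form: parseCurrency(), showError() and confirmWithUser() are made-up helpers, and the $10,000 upper bound is an arbitrary example of a defined boundary.

// Accept free-form input ("$10.00", "10", " 10 dollars "), reduce it to a number,
// then sanity-check the result instead of rejecting the raw text outright.
function parseCurrency(raw) {
  var cleaned = String(raw).replace(/[^0-9.]/g, '');   // same character class as the inline example
  var value = parseFloat(cleaned);
  return isNaN(value) ? null : value;
}

var amount = parseCurrency(document.querySelector('[name="amount"]').value);

if (amount === null || amount <= 0 || amount > 10000) {
  // Out of bounds or incomprehensible: give clear feedback rather than failing silently.
  showError('Please enter a donation amount, for example "$10.00".');
} else {
  // For important transactions, echo the interpreted value back for confirmation.
  confirmWithUser('Donate $' + amount.toFixed(2) + '?');
}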

Our application’s liberal human face will assume that these events are the exception: If we’ve designed and labeled our interfaces well, most people will provide reasonable input most of the time. Although precisely what people enter (“$10.00” or “10”) may vary, the computer can easily process the majority of those entries to derive the decimal value it needs, whether inline or server-side. But its cautious, machine-oriented face will check that assumption before it takes any action. If the transaction is important, like when a user enters the amount of a donation, the system will need to provide clear feedback and ask for confirmation before proceeding, even if the value falls within the boundaries of normalcy. Otherwise, aggressive reduction of text to a number could result in an unexpected—and potentially very problematic—result for our user:

Overly aggressive reduction of text input to a number leads to unexpected results

Dates

To a computer, dates and times are just a special case of numbers. In UNIX-based systems, for example, time is often represented as the number of seconds that have elapsed since January 1, 1970.

For a person, however, context is key to interpreting dates. When Alice asks, “Can we meet on Thursday?” Bob can safely assume that she means the next Thursday on the calendar, and he certainly doesn’t have to ask if she means Thursday of last week. Interface designers should attempt to get as close to this human experience as possible by considering the context in which a date is required.

We can do that by revisiting some typical methods of requesting a date from users:

  • A text input, often with specific formatting requirements (MM/DD/YYYY, for example)
  • A miniature calendar widget, arranging dates in a month-by-month grid

These patterns are not mutually exclusive, and a robust application might offer either or both, depending on the context.

There are cases where the calendar widget may be very helpful, such as identifying a future date that’s not known (choosing the second Tuesday of next February). But much of the time, a text input probably offers the fastest path to entering a known date, especially if it’s in the near future. If Bob wants to make a note about Thursday’s meeting, it seems more efficient for him to type the word “Thursday” or even the abbreviation “Thu” than to invoke a calendar and guide his mouse (or worse, his fingertip on a touchscreen) to the appropriate tiny box.

But when we impose overly restrictive formatting requirements on the text, we undermine that advantage—if Bob has to figure out the correct numeric date, and type it in a very specific way, he might well need the calendar after all. Or if an application requests Alice’s birthdate in MM/DD/YYYY format, why should it trigger an error if she types 1/1/1970, omitting the leading zeroes? In her mind, it’s an easily comprehensible date.

An application embracing the Robustness Principle would accept anything from the user that resembles a date, again providing feedback to confirm her entry, but only reporting it as a problem if the interpretation fails or falls out of bounds. A number of software libraries exist to help computers translate human descriptions of dates like “tomorrow,” “next Friday,” or “11 April” into their structured, machine-oriented equivalents. Although many are quite sophisticated, they do have limitations—so when using them, it’s also helpful to provide users with examples of the most reliable patterns, even though the system can accept other forms of input.
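To show what such a library is doing on our behalf, here is a tiny, hypothetical JavaScript fragment that handles just two of those patterns, "tomorrow" and "next <weekday>"; a real application would lean on a dedicated parsing library rather than this sketch.

// Illustrative only: real libraries handle far more forms ("11 April", "in two weeks", locales).
function parseHumanDate(text, now) {
  now = now || new Date();
  var input = text.trim().toLowerCase();
  var days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
  var result = new Date(now.getTime());

  if (input === 'tomorrow') {
    result.setDate(result.getDate() + 1);
    return result;
  }

  var match = input.match(/^next\s+(\w+)/);
  if (match && days.indexOf(match[1]) !== -1) {
    var ahead = (days.indexOf(match[1]) - result.getDay() + 7) % 7 || 7;   // always move forward
    result.setDate(result.getDate() + ahead);
    return result;
  }

  return null;   // interpretation failed: fall back to feedback or the calendar widget
}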

Addresses

Perhaps more often than any other type of input, address fields tend to be based on database design rather than the convenience of human users. Consider this common layout:

Typical set of inputs for capturing an address

This set of fields may cover the majority of cases for U.S. addresses, but it doesn’t begin to scratch the surface for international users. And even in the U.S., there are legitimate addresses it won’t accommodate well.

An application that wants to accept human input liberally might take the daring approach of using a single textarea to capture the address, allowing the user to structure it just as he or she would when composing a letter. And if the address will only ever be used in its entirety, storing it as a single block of text may be all that’s required. It’s worth asking what level of detail is truly needed to make use of the data.

Often we have a clear business need to store the information in discrete fields, however. There are many web-based and local services that can take a variety of address formats and standardize them, whether they were collected through a single input or a minimal set of structured elements.

Consider the following address:

Avenue Appia 20
1211 Genève 27
SUISSE

The Google Geocoding API, for example, might translate it to something like the following, with a high level of granularity for mapping applications:

"address_components" : [
  {
     "long_name" : "20",
     "short_name" : "20",
     "types" : [ "street_number" ]
  },
  {
     "long_name" : "Avenue Appia",
     "short_name" : "Avenue Appia",
     "types" : [ "route" ]
  },
  {
     "long_name" : "Geneva",
     "short_name" : "Geneva",
     "types" : [ "locality", "political" ]
  },
  {
     "long_name" : "Genève",
     "short_name" : "Genève",
     "types" : [ "administrative_area_level_2", "political" ]
  },
  {
     "long_name" : "Geneva",
     "short_name" : "GE",
     "types" : [ "administrative_area_level_1", "political" ]
  },
  {
     "long_name" : "Switzerland",
     "short_name" : "CH",
     "types" : [ "country", "political" ]
  },
  {
     "long_name" : "1202",
     "short_name" : "1202",
     "types" : [ "postal_code" ]
  }
]

The details (and license terms) of such standardization systems will vary and may not be appropriate for all applications. Complex addresses may be a problem, and we’ll need to give the application an alternate way to handle them. It will be more work. But to achieve the best user experience, it should be the application’s responsibility to first try to make sense of reasonable input. Users aren’t likely to care whether a CRM database wants to hold their suite number separately from the street name.
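For orientation, a structured response like the one above typically comes back from a plain HTTP request. The sketch below is a rough illustration only: the API key is a placeholder, the choice of the Google endpoint is an assumption, and real use is subject to the license terms just mentioned.

// Send the user's free-form address text to a geocoding service and read back
// the structured components; error handling is kept minimal for brevity.
var address = 'Avenue Appia 20, 1211 Genève 27, SUISSE';
var url = 'https://maps.googleapis.com/maps/api/geocode/json' +
          '?address=' + encodeURIComponent(address) +
          '&key=YOUR_API_KEY';   // placeholder key

fetch(url)
  .then(function (response) { return response.json(); })
  .then(function (data) {
    if (data.status === 'OK') {
      var components = data.results[0].address_components;
      // Map the components into the application's discrete database fields as needed.
    } else {
      // Complex or unrecognized address: store the raw text and flag it for review.
    }
  });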

The exception or the rule

Parsing human language into structured data won’t always work. Under guideline number four, a robust system will detect and handle edge cases gracefully and respectfully, while working to minimize their occurrence. This long tail of user experience shouldn’t wag the proverbial dog. In other words, if we can create an interface that works flawlessly in 95 percent of cases, reducing the time to complete tasks and showing a level of empathy that surpasses user expectations, it’s probably worth the effort it takes to build an extra feedback loop to handle the remaining five percent.

Think again about the process of placing an order over the phone, speaking to a human being. If she doesn’t understand something you say, she may ask you to clarify. Even when she does understand, she may read the details back to you to confirm. Those interactions are normal and usually courteous. In fact, they reassure us all that the end result of the interaction will be what we expect.

She is not, however, likely to provide you with a set of rigid instructions as soon as she answers the phone, and then berate you for failing to meet some of them. And yet web applications create the equivalent interaction all the time (sometimes skipping past the instructions and going directly to the berating).

For most developers, system integrity is an understandably high priority. Better structure in user-supplied data means that we can handle it more reliably. We want reliable systems, so we become advocates for the machine’s needs. When input fails to pass validation, we tend to view it as a failure of the user—an error, an attempt to feed bad data into our carefully designed application.

But whether or not our job titles include the phrase “user experience,” we must advocate at least as much for the people who use our software as we do for computer systems. Whatever the problem a web application is solving, ultimately it was created to benefit a human being. Everyone who has a hand in building an application influences the experience, so improving it is everyone’s job. Thinking in terms of robustness can help us balance the needs of both people and computers.

Postel’s Law has proven its worth by running the internet for more than three decades. May we all hold our software—and the experiences it creates—to such a high standard.


Oculus Rift (image: Sean Hollister)

We just gave the Oculus Rift, a virtual reality headset, our Best of CES award. Guess who else is experimenting with virtual reality? Valve Software. At the 2013 Game Developers Conference in San Francisco, California, the same renowned video game publisher that's hard at work on the Steambox will also share its thoughts on VR, after spending a full year prototyping ways to create virtual reality hardware and software. Valve will host two 25-minute lectures entitled "Why Virtual Reality is Hard (And Where it Might be Going)" and "What We Learned Porting Team Fortress 2 to Virtual Reality" at the conference.

The former is hosted by Michael Abrash, the man behind Valve's mystery wearable computing hardware project... and the latter...

