
Human–computer interaction


First official Brain-Computer Interface journal coming in January 2014

At last, there will be a printed journal to which BCI researchers can submit their work. It is called Brain-Computer Interfaces and is published by Taylor & Francis, an international company originating in the UK that publishes books and academic journals. The journal was announced, and its importance discussed, at the recent BCI meeting in Pacific Grove, California.

The new BCI journal will have four issues a year. The first issue is planned to be published in January 2014.

The journal will focus on the following areas:

  • Development and user-centered evaluation of engineered BCI applications with emphasis on the analysis of what aspects are crucial to making the system work, in addition to straightforward assessment of its success.
  • Scientific investigation of patterns of brain activity that can be used, or show promise of being usable, to drive BCI applications.
  • Development and evaluation of signal processing methods that extract signal features, classify them, and otherwise translate brain signals into device commands.
  • New invasive and noninvasive methods to monitor and acquire brain signals.
  • Applications of BCI technology to understand human perception, affect, action, and various aspects of cognition and behavior.
  • Ethical and sociological implications of brain-computer interfacing applications.
  • Human factors and human-computer interaction (HCI) concerns in the design, development and evaluation of BCIs.
  • Clinical trials and individual case studies of the experimental therapeutic application of BCIs.
  • Behavioral studies of BCI use in humans and animals.
  • Studies of neurosurgical techniques relevant to BCIs.
  • Proposal, review and analysis of standards for BCI hardware, software and protocols.

The new printed journal is clearly a great opportunity for the whole BCI community to come together around a more organized publication standard. Contributing to the journal will also make it easier for BCI researchers to find collaborators.

If you would like to submit a paper for consideration, contact the co-editors Chang Nam and Jeremy Hill.


Bismillah writes "University of Bristol researchers have come up with a way to make touch screens more touchy-feely so to speak, using ultrasound waves to produce haptic feedback. You don't need to touch the screen even, as the UltraHaptics waves can be felt mid-air. Very Minority Report, but cooler."

The researchers built an ultrasonic transducer grid behind an acoustically transparent display. Using acoustic modeling of a volume above the screen, they can create multiple movable control points with varying properties. A Leap Motion controller was used to detect the hand movements.


A video of a new virtual reality prototype that combines Oculus Rift and Razer Hydra technology shows how users can navigate artificial environments with their minds.

Users require an Emotiv EPOC, a device capable of mapping certain thought patterns to actions, to read their brainwaves and interact with the prototype through thought. Naturally, this is still in its experimental phase: while virtual reality technology is becoming affordable, the Emotiv EPOC's capabilities are "still quite primitive" and not wholly user-friendly, writes developer Chris Zaharia.

"With my experience in the education industry through my startup Zookal and keen interest in neuroscience, I had a thought around how these technologies could be used together to enhance education and at the same time, see how far can we go with using cognitive control in a virtual simulation," he writes.

Zaharia hopes to explore the possibility of using virtual reality for educational purposes ranging from engineering to biology. The demo offers a look at what is currently possible using virtual reality headsets, motion tracking through Razer Hydra and cognitive control in virtual simulations.


Original author: Cesar Torres


Tumblr Creative Director Peter Vidani (photo: Cesar Torres)

New York City noise blares right outside Tumblr’s office in the Flatiron District in Manhattan. Once inside, the headquarters hum with a quiet intensity. I am surrounded by four dogs that employees have brought to the workspace today. Apparently, there are even more dogs lurking somewhere behind the perpendicular rows of desks. What makes the whole thing even spookier is that these dogs don’t bark or growl. It’s like someone’s told them that there are developers and designers at work, and somehow they’ve taken the cue.

I’m here to see Tumblr’s Creative Director Peter Vidani who is going to pull the curtain back on the design process and user experience at Tumblr. And when I say design process, I don’t just mean color schemes or typefaces. I am here to see the process of interaction design: how the team at Tumblr comes up with ideas for the user interface on its website and its mobile apps. I want to find out how those ideas are shaped into a final product by their engineering team.

Back in May, Yahoo announced it was acquiring Tumblr for $1.1 billion. Yahoo indicated that Tumblr would continue to operate independently, though we will probably see a lot of content crossover between the millions of blog posts hosted by Tumblr and Yahoo’s search engine technology. It’s a little known fact that Yahoo has provided some useful tools for UX professionals and developers over the years through their Design Pattern Library, which shares some of Yahoo’s most successful and time-tested UI touches and interactions with Web developers. It’s probably too early to tell if Tumblr’s UI elements will filter back into these libraries. In the meantime, I talked to Vidani about how Tumblr UI features come to life.


Original author: Cyrus Farivar


Stephen Balaban, co-founder of Lambda Labs, based in Palo Alto and San Francisco (photo: Cyrus Farivar)

PALO ALTO, CA—Even while sitting in a café on University Avenue, one of Silicon Valley’s best-known commercial districts, it’s hard not to get noticed wearing Google Glass.

For more than an hour, I sat for lunch in late May 2013 with Stephen Balaban as he wore Google's new wearable tech. At least three people came by and gawked at the newfangled device, and Balaban even offered to let one woman try it on for herself—she turned out to be the wife of famed computer science professor Tony Ralston.

Balaban is the 23-year-old co-founder of Lambda Labs. It's a project he hopes will eventually become the “largest wearable computing software company in the world.” In Balaban's eyes, Lambda's recent foray into facial recognition only represents the beginning.


Original author: Thomas Joos

  

As a mobile UI or UX designer, you probably remember the launch of Apple’s first iPhone as if it were yesterday. Among other things, it introduced a completely touchscreen-centered interaction to an individual’s most private and personal device. It was a game-changer.

Today, kids grow up with touchscreen experiences like it’s the most natural thing. Parents are amazed by how fast their children understand how a tablet or smartphone works. This shows that touch and gesture interactions have a lot of potential to make mobile experiences easier and more fun to use.

Challenging Bars And Buttons

The introduction of “Human Interface Guidelines” and Apple’s App Review Board had a great impact on the quality of mobile applications. It helped a lot of designers and developers understand the core mobile UI elements and interactions. One of Apple’s popular suggestions, for instance, is to use UITabBar and UINavigationBar components — a guideline that many of us have followed, including me.

In fact, if you can honestly say that the first iPhone application you designed didn’t have any top or bottom bar elements, get in touch and send over a screenshot. I will buy you a beer and gladly tweet that you were ahead of your time.

My issue with the top and bottom bars is that they fill almost 20% of the screen. When designing for a tiny canvas, we should use every available pixel to focus on the content. In the end, that’s what really matters.

In this innovative industry, mobile designers need some time to explore how to design more creative and original interfaces. Add to that Apple’s frustrating rejection of apps that “think outside the box,” and it is no surprise that experimental UI and UX designs such as Clear and Rise took a while to see the light of day. But they are here now. And while they might be quite extreme and focused on high-brow users and early adopters, they show us the great creative potential of gesture-driven interfaces.

Rise and Clear
Pulling to refresh feels very intuitive.

The Power Of Gesture-Driven Interfaces

For over two years now, I’ve been exploring the ways in which gestures add value to the user experience of a mobile application. The most important criterion for me is that these interactions feel very intuitive. This is why creative interactions such as Loren Brichter’s “Pull to Refresh” have become a standard in no time. Brichter’s interaction, introduced in Tweetie for iPhone, feels so intuitive that countless list-based applications suddenly adopted the gesture upon its appearance.
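To make the idea concrete, here is a minimal sketch of the same pull-to-refresh pattern in a web context. It assumes a scrollable list element with the id feed; the 70-pixel threshold and the reloadFeed() callback are hypothetical names used only for illustration.

```typescript
// Minimal pull-to-refresh sketch (web analogue of Brichter's gesture).
// Assumes an element with id "feed"; reloadFeed() is a hypothetical callback.
function reloadFeed(): void {
  // Hypothetical: fetch new items and prepend them to the list.
  console.log("refreshing feed…");
}

const feed = document.getElementById("feed") as HTMLElement;
const THRESHOLD = 70; // pixels the user must pull down before a refresh fires
let startY: number | null = null;

feed.addEventListener("touchstart", (e: TouchEvent) => {
  // Only arm the gesture when the list is already scrolled to the top.
  startY = feed.scrollTop === 0 ? e.touches[0].clientY : null;
});

feed.addEventListener("touchmove", (e: TouchEvent) => {
  if (startY === null) return;
  const pulled = e.touches[0].clientY - startY;
  if (pulled > 0) {
    // Give visual feedback proportional to the pull distance.
    feed.style.transform = `translateY(${Math.min(pulled, THRESHOLD)}px)`;
  }
});

feed.addEventListener("touchend", (e: TouchEvent) => {
  if (startY === null) return;
  const pulled = e.changedTouches[0].clientY - startY;
  feed.style.transform = "";
  startY = null;
  if (pulled > THRESHOLD) reloadFeed();
});
```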

Removing UI Clutter

A great way to start designing a more gesture-driven interface is to use your main screen only as a viewport to the main content. Don’t feel obliged to make important navigation always visible on the main screen. Rather, consider giving it a place of its own. Speaking in terms of a virtual 2-D or 3-D environment, you could design the navigation somewhere next to, below, behind, in front of, above or hidden on top of the main view. A dragging or swiping gesture is a great way to lead the user to this UI element. It’s up to you to define and design the app.

What I like about Facebook and Gmail on iOS, for instance, is their implementation of a “side-swiping” menu. This trending UI concept is very easy to use. Users swipe the viewport to the right to reveal navigation elements. Not only does this make the app very content-focused, but accessing any section of the application takes only two to three touch interactions. A lot of apps do far worse than that!

Sideswipe Menu
Facebook and Gmail’s side-swiping menu
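A rough sketch of how such a side-swiping menu could be wired up with pointer events follows. The #viewport id, the 260-pixel menu width and the edge and snap thresholds are assumptions; the navigation drawer is assumed to sit beneath the viewport in the page's CSS, not any app's actual implementation.

```typescript
// Side-swiping menu sketch: dragging the viewport to the right reveals a
// navigation drawer underneath. Element id and thresholds are assumptions.
const viewport = document.getElementById("viewport") as HTMLElement;
const MENU_WIDTH = 260;
let dragStartX: number | null = null;

viewport.addEventListener("pointerdown", (e: PointerEvent) => {
  // Only start the gesture near the left edge, like Facebook and Gmail do.
  if (e.clientX < 30) {
    dragStartX = e.clientX;
    viewport.setPointerCapture(e.pointerId);
  }
});

viewport.addEventListener("pointermove", (e: PointerEvent) => {
  if (dragStartX === null) return;
  const dx = Math.max(0, Math.min(e.clientX - dragStartX, MENU_WIDTH));
  viewport.style.transform = `translateX(${dx}px)`; // drawer is revealed beneath
});

viewport.addEventListener("pointerup", (e: PointerEvent) => {
  if (dragStartX === null) return;
  const dx = e.clientX - dragStartX;
  // Snap open if the user dragged past a quarter of the menu width.
  viewport.style.transform = dx > MENU_WIDTH / 4
    ? `translateX(${MENU_WIDTH}px)`
    : "translateX(0)";
  dragStartX = null;
});
```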

In addition to the UI navigation, your app probably supports contextual interactions, too. Adding the same two or three buttons below every content item will certainly clutter the UI! While buttons might seem to be useful triggers, gestures have great potential to make interaction with content more intuitive and fun. Don’t hesitate to integrate simple gestures such as tapping, double-tapping and tapping-and-holding to trigger important interactions. Instagram supports a simple double-tap to perform one of its key features, liking and unliking a content item. I would not be surprised to see other apps integrate this shortcut in the near future.
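As a sketch of how such a double-tap shortcut could be detected in a web view (Instagram's native implementation is not public, so this is only an analogue), the helper below treats two taps within 300 ms as a double tap; the time window, element id and CSS class are assumptions.

```typescript
// Double-tap-to-like sketch: two taps on the same element within 300 ms
// toggle its "liked" state.
function attachDoubleTap(el: HTMLElement, onDoubleTap: () => void): void {
  let lastTap = 0;
  el.addEventListener("pointerup", () => {
    const now = performance.now();
    if (now - lastTap < 300) {
      onDoubleTap();
      lastTap = 0; // reset so a triple tap doesn't fire twice
    } else {
      lastTap = now;
    }
  });
}

// Usage: toggle a CSS class that shows the "liked" heart overlay.
const photo = document.getElementById("photo") as HTMLElement;
attachDoubleTap(photo, () => photo.classList.toggle("liked"));
```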

An Interface That Fits

When designing an innovative mobile product, predicting user behavior can be very difficult. When we worked with Belgium’s Public Radio, my team really struggled with the UI balance between music visualization and real-time news. The sheer number of contextual scenarios and preferences made it very hard to come up with the perfect UI. So, we decided to integrate a simple dragging gesture to enable users to adjust the balance themselves.

Radio+
By dragging, users can balance music-related content and live news.

This gesture adds a creative contextual dimension to the application. The dragging gesture does not take the user from one section (news or music) to another. Rather, it enables the user to focus on the type of content they are most interested in, without missing out on the other.
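Below is a rough sketch of how such a drag-to-balance gesture could work in a web prototype. The #music and #news panes, the flex-based layout and the mapping from drag distance to balance are assumptions for illustration, not the Radio+ app's actual code.

```typescript
// Drag-to-balance sketch: a horizontal drag shifts the split between a
// "music" pane and a "news" pane instead of navigating away from either.
const music = document.getElementById("music") as HTMLElement;
const news = document.getElementById("news") as HTMLElement;
let balance = 0.5;                  // 0 = all news, 1 = all music
let dragOriginX: number | null = null;
let balanceAtDragStart = balance;

function applyBalance(b: number): void {
  music.style.flexGrow = String(b);
  news.style.flexGrow = String(1 - b);
}

document.body.addEventListener("pointerdown", (e: PointerEvent) => {
  dragOriginX = e.clientX;
  balanceAtDragStart = balance;
});

document.body.addEventListener("pointermove", (e: PointerEvent) => {
  if (dragOriginX === null) return;
  // Map the drag distance to a fraction of the screen width.
  const delta = (e.clientX - dragOriginX) / window.innerWidth;
  balance = Math.max(0, Math.min(1, balanceAtDragStart + delta));
  applyBalance(balance);
});

document.body.addEventListener("pointerup", () => { dragOriginX = null; });

applyBalance(balance); // initial 50/50 split
```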

Think in Terms of Time, Dimension and Animation

What action is triggered when the user taps an item? And how do you visualize that it has actually happened? How fast does a particular UI element animate into the viewport? Does it automatically go off-screen after five seconds of no interaction?

The rise of touch and gesture-driven devices dramatically changes the way we design interaction. Instead of thinking in terms of screens and pages, we are thinking more in terms of time, dimension and animation. You’ve probably noticed that fine-tuning user interactions and demonstrating them to colleagues and clients with static wireframe screenshots is not easy. You don’t fully see, understand and feel what will happen when you touch, hold, drag and swipe items.

Certain prototyping tools, including Pop and Invision, can help bring wireframes to life. They are very useful for testing an application’s flow and for pinpointing where and when a user might get stuck. Your application has a lot more going on than simple back-and-forth navigation, so you need to detect interface bugs and potential sources of confusion as soon as possible. You wouldn’t want your development team to point them out to you now, would you?

InvisionApp
Invision enables you to import and link your digital wireframes.

To be more innovative and experimental, get together with your client first and explain that a traditional wireframe is not the UX deliverable that they need. Show the value of interactive wireframes and encourage them to include this in the process. It might increase the timeline and budget, but if they are expecting you to go the extra mile, it shouldn’t be a problem.

I even offer to produce a conceptual interface video for my clients, because once they’ve worked with the interactive wireframes and sorted out the details, they will often need something sexier to present to their internal stakeholders.

The Learning Curve

When designing gesture-based interactions, be aware that every time you remove UI clutter, the application’s learning curve goes up. Without visual cues, users could get confused about how to interact with the application. A bit of exploration is no problem, but users should know where to begin. Many apps show a UI walkthrough when first launched, and I agree with Max Rudberg that walkthroughs should explain only the most important interactions. Don’t explain everything at once. If it’s too explicit and long, users will skip it.

Why not challenge yourself and gradually introduce creative UI hints as the user uses the application? This pattern is often referred to as progressive disclosure and is a great way to show only the information that is relevant to the user’s current activity. YouTube’s Capture application, for instance, tells the user to rotate the device to landscape orientation just as the user is about to open the camera for the first time.
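As a small sketch of this kind of progressive disclosure in a web app, the snippet below shows a hint only the first time the camera view is opened and remembers that it has been seen. The storage key, the showHint() helper and the four-second timeout are assumptions.

```typescript
// Progressive-disclosure sketch: show the "rotate to landscape" hint only the
// first time the user opens the camera view, then remember that it was seen.
function showHint(message: string): void {
  const hint = document.createElement("div");
  hint.className = "ui-hint";
  hint.textContent = message;
  document.body.appendChild(hint);
  setTimeout(() => hint.remove(), 4000); // dismiss after a few seconds
}

function openCamera(): void {
  if (!localStorage.getItem("seenRotateHint")) {
    showHint("Rotate your device to landscape to start recording");
    localStorage.setItem("seenRotateHint", "true");
  }
  // …the actual camera UI would open here
}
```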

Visual Hints
Fight the learning curve with a UI walkthrough and/or visual hints.

Adding visual cues to the UI is not the only option. In the Sparrow app, the search bar appears for a few seconds before animating upwards and going off-screen, a subtle way of showing that it’s waiting to be pulled down.
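A sketch of that kind of animated hint in a web context might look like the following; the #searchBar element, the timings and the CSS transform values are assumptions.

```typescript
// Sparrow-style hint sketch: briefly reveal a hidden search bar on launch,
// then animate it back off-screen so the user learns it can be pulled down.
const searchBar = document.getElementById("searchBar") as HTMLElement;
searchBar.style.transform = "translateY(-100%)";   // start hidden above the viewport
void searchBar.offsetHeight;                       // force a reflow so the next change animates
searchBar.style.transition = "transform 0.3s ease";
searchBar.style.transform = "translateY(0)";       // slide into view
setTimeout(() => {
  searchBar.style.transform = "translateY(-100%)"; // tuck it away again after a moment
}, 2000);
```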

Stop Talking, Start Making

The iPhone ushered in a revolution in interactive communication. Only five years later, touchscreen devices are all around us, and interaction designers are redefining the ways people use digital content.

We need to explore and understand the potential of touch and gesture-based interfaces and start thinking more in terms of time, dimension and animation. As demonstrated by several innovative applications, gestures are a great way to make an app more content-focused, original and fun. And many gesture-based interactions that seem too experimental at first come to be seen as very intuitive.

For a complete overview of the opportunities for gestures on all major mobile platforms, check out Luke Wroblewski’s “Touch Gesture Reference Overview.” I hope you’re inspired to explore gesture-based interaction and intensify your adventures in mobile interfaces. Don’t be afraid to go the extra mile. With interactive wireframes, you can iterate your way to the best possible experience. So, let’s stop talking and start making.


© Thomas Joos for Smashing Magazine, 2013.

Original author: Andrew Cunningham

Image: Aurich Lawson / Thinkstock

Welcome back to our three-part series on touchscreen technology. Last time, Florence Ion walked you through the technology's past, from the invention of the first touchscreens in the 1960s all the way up through the mid-2000s. During this period, different versions of the technology appeared in everything from PCs to early cell phones to personal digital assistants like Apple's Newton and the Palm Pilot. But all of these gadgets proved to be little more than a tease, a prelude to the main event. In this second part in our series, we'll be talking about touchscreens in the here-and-now.

When you think about touchscreens today, you probably think about smartphones and tablets, and for good reason. The 2007 introduction of the iPhone kicked off a transformation that turned a couple of niche products—smartphones and tablets—into billion-dollar industries. The current fierce competition from software like Android and Windows Phone (as well as hardware makers like Samsung and a host of others) means that new products are being introduced at a frantic pace.

The screens themselves are just one of the driving forces that makes these devices possible (and successful). Ever-smaller, ever-faster chips allow a phone to do things only a heavy-duty desktop could do just a decade or so ago, something we've discussed in detail elsewhere. The software that powers these devices is more important, though. Where older tablets and PDAs required a stylus or interaction with a cramped physical keyboard or trackball to use, mobile software has adapted to be better suited to humans' native pointing device—the larger, clumsier, but much more convenient finger.


Original author: Scott Gilbertson

Hybrids. Image: Screenshot/Webmonkey.

The advent of hybrid laptops that double as tablets or offer some sort of touch input has greatly complicated the life of web developers.

A big part of developing for today’s myriad screens is knowing when to adjust the interface, based not just on screen size, but other details like input device. Fingers are far less precise than a mouse, which means bigger buttons, form fields and other input areas.

But with hybrid devices like touch screen Windows 8 laptops or dockable Android tablets with keyboards, how do you know whether the user is browsing with a mouse or a finger?

Over on the Mozilla Hacks blog, Patrick Lauke tackles that question in an article on detecting touch-capable devices. Lauke covers the relatively simple case of touch-only devices, such as those running iOS, before diving into the far more complex problem of hybrid devices.

Lauke’s answer? If developing for the web hasn’t already taught you this lesson, perhaps hybrid devices will — learn to live with uncertainty and accept that you can’t control everything.

What’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities.
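In code, that input-agnostic attitude can be as simple as handling whichever event model the browser provides rather than branching on a device classification up front. The sketch below prefers Pointer Events where available and falls back to touch plus mouse listeners otherwise; the #button element and the activate() handler are hypothetical.

```typescript
// Input-agnostic sketch: handle whatever input arrives instead of trying to
// classify the device up front. activate() is a hypothetical shared handler.
function activate(x: number, y: number): void {
  console.log(`activated at ${x}, ${y}`);
}

const target = document.getElementById("button") as HTMLElement;

if (window.PointerEvent) {
  // Pointer Events unify mouse, touch and stylus into one model.
  target.addEventListener("pointerup", (e: PointerEvent) => activate(e.clientX, e.clientY));
} else {
  // Older browsers: bind both touch and mouse; preventDefault() on touchend
  // stops the synthetic mouse event from firing the handler a second time.
  target.addEventListener("touchend", (e: TouchEvent) => {
    e.preventDefault();
    const t = e.changedTouches[0];
    activate(t.clientX, t.clientY);
  });
  target.addEventListener("mouseup", (e: MouseEvent) => activate(e.clientX, e.clientY));
}
```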

While learning to live with uncertainty and providing interfaces that work with any input sounds nice in theory, developers are bound to want something a bit more concrete. There’s some hope on the horizon. Microsoft has proposed the Pointer Events spec (and created a build of Webkit that supports it). And the CSS Media Queries Level 4 spec will offer a pointer query to see what sort of input device is being used (mouse, finger, stylus etc).
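For browsers that do implement the Level 4 pointer query, a feature-detected sketch might look like this; the big-touch-targets class name is an assumption, and the change listener re-checks the query because hybrid devices can switch their primary input.

```typescript
// Sketch of the Media Queries Level 4 "pointer" query via matchMedia.
// A coarse pointer (a finger) gets larger hit targets.
const coarse = window.matchMedia("(pointer: coarse)");

function applyPointerHints(isCoarse: boolean): void {
  document.body.classList.toggle("big-touch-targets", isCoarse);
}

applyPointerHints(coarse.matches);

// Hybrid devices can change primary input (e.g. a tablet docked to a
// keyboard), so re-check whenever the query's result flips.
coarse.addEventListener("change", (e) => applyPointerHints(e.matches));
```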

Unfortunately, neither Pointer Events nor Media Queries Level 4 are supported in today’s browsers. Eventually there probably will be some way to easily detect and know for certain which input device is being used, but for the time being you’re going to have to live with some level of uncertainty. Be sure to read through Lauke’s post for more details and some sample code.

Original author: Florence Ion

Image: Aurich Lawson / Thinkstock

It's hard to believe that just a few decades ago, touchscreen technology could only be found in science fiction books and film. These days, it's almost unfathomable how we once got through our daily tasks without a trusty tablet or smartphone nearby, but it doesn't stop there. Touchscreens really are everywhere. Homes, cars, restaurants, stores, planes, wherever—they fill our lives in spaces public and private.

It took generations and several major technological advancements for touchscreens to achieve this kind of presence. Although the underlying technology behind touchscreens can be traced back to the 1940s, there's plenty of evidence that suggests touchscreens weren't feasible until at least 1965. Popular science fiction television shows like Star Trek didn't even refer to the technology until Star Trek: The Next Generation debuted in 1987, almost two decades after touchscreens were first deemed possible. But their inclusion in the series paralleled the advancements in the technology world, and by the late 1980s, touchscreens finally appeared to be realistic enough that consumers could actually bring the technology into their own homes.

This article is the first of a three-part series on touchscreen technology's journey from fiction to fact. The first three decades of touch are important to reflect upon in order to really appreciate the multitouch technology we're so used to having today. Today, we'll look at when these technologies first arose and who introduced them, plus we'll discuss several other pioneers who played a big role in advancing touch. Future entries in this series will study how the changes in touch displays led to essential devices for our lives today and where the technology might take us in the future. But first, let's put finger to screen and travel to the 1960s.

