
Virtual reality


First official Brain-Computer Interface journal coming in January 2014

At last, there will be a printed journal to which BCI researchers can submit their work. It is called Brain-Computer Interfaces and is published by Taylor & Francis, an international publisher of books and academic journals originating in the UK. The journal was announced, and its importance discussed, at the recent BCI meeting in Pacific Grove, California.

The new BCI journal will have four issues a year. The first issue is planned to be published in January 2014.

The journal will focus on the following areas:

  • Development and user-centered evaluation of engineered BCI applications with emphasis on the analysis of what aspects are crucial to making the system work, in addition to straightforward assessment of its success.
  • Scientific investigation of patterns of brain activity that can be used, or show promise of being usable, to drive BCI applications.
  • Development and evaluation of signal processing methods that extract signal features, classify them, and otherwise translate brain signals into device commands.
  • New invasive and noninvasive methods to monitor and acquire brain signals.
  • Applications of BCI technology to understand human perception, affect, action, and various aspects of cognition and behavior.
  • Ethical and sociological implications of brain-computer interfacing applications.
  • Human factors and human-computer interaction (HCI) concerns in the design, development and evaluation of BCIs.
  • Clinical trials and individual case studies of the experimental therapeutic application of BCIs.
  • Behavioral studies of BCI use in humans and animals.
  • Studies of neurosurgical techniques relevant to BCIs.
  • Proposal, review and analysis of standards for BCI hardware, software and protocols.

The new printed journal is clearly a great opportunity for the whole BCI community to coalesce around a more organized publication standard. Contributing to the journal will also make it easier for BCI researchers to find collaborators.

If you would like to submit a paper for consideration, contact the co-editors Chang Nam and Jeremy Hill.


Author, researcher, and psychedelic pioneer Timothy Leary could have added another title to his name: creator of an amazing, incredibly weird take on William Gibson's Neuromancer showcased by Wired. Since acquiring Leary's archives in mid-2011, the New York Public Library has been uncovering and publishing details about Leary's work, including fragments of Leary's plans for scrapped computer games. In 1985, he helped develop and publish Mind Mirror, a psychoanalytic game that let players build and role-play personalities — Electronic Arts, which put out the title, reportedly sold 65,000 copies in the two years after release. But according to material that the library released to researchers last week, he also had far more ambitious plans.


A video of a new virtual reality prototype that uses both Oculus Rift and Razer Hydra technology shows how users can move around artificial environments with their minds.

Users require an Emotiv EPOC, a device capable of mapping certain thought patterns to actions, to read their brainwaves and interact with the prototype through thought. Naturally, this is still in its experimental phase; while virtual reality technology is becoming affordable, the Emotiv EPOC's capabilities are "still quite primitive" and not wholly user-friendly, writes developer Chris Zaharia.

"With my experience in the education industry through my startup Zookal and keen interest in neuroscience, I had a thought around how these technologies could be used together to enhance education and at the same time, see how far can we go with using cognitive control in a virtual simulation," he writes.

Zaharia hopes to explore the possibility of using virtual reality for educational purposes ranging from engineering to biology. The demo offers a look at what is currently possible using virtual reality headsets, motion tracking through Razer Hydra and cognitive control in virtual simulations.
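
To make the thought-to-action mapping concrete, here is a minimal, hypothetical sketch of how application code might react to classified mental commands. The types, event names and threshold are illustrative stand-ins, not the actual Emotiv SDK (which is C/C++-based) or Zaharia's implementation:

```swift
// Hypothetical thought-to-action mapping for a VR avatar. The EPOC's
// classifier would emit events like these after the user trains it.
enum MentalCommand {
    case push, pull, lift, neutral
}

struct CommandEvent {
    let command: MentalCommand
    let power: Double   // classifier confidence, 0.0...1.0
}

final class AvatarController {
    private let threshold = 0.6  // ignore weak, noisy classifications

    func handle(_ event: CommandEvent) {
        guard event.power >= threshold else { return }
        switch event.command {
        case .push:    moveForward(speed: event.power)
        case .pull:    moveBackward(speed: event.power)
        case .lift:    jump()
        case .neutral: break
        }
    }

    private func moveForward(speed: Double)  { /* advance the camera rig */ }
    private func moveBackward(speed: Double) { /* back the camera rig up */ }
    private func jump()                      { /* trigger jump animation */ }
}
```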



Resogun, the latest colorful shooter from Finnish indie developer Housemarque, is an extravagant, heavily detailed demonstration of the PlayStation 4's graphical horsepower. But look under the hood and you'll find an old shoot-'em-up that isn't shy of aping its inspirations.

If you remember losing quarters to the local arcade, you will recognize Resogun's Defender-like structure. Piloting a spaceship, you protect humans from waves of enemies that encroach from both the left and right sides of the screen. Resogun adds a twist: you must first "unlock" the humans by exterminating a special set of enemies before collecting the living cargo and delivering it to one of two goals.

It's just enough complexity to make the Defender homage feel new. In frantic moments, collecting humans off the ground and tossing them into their safety zone felt like delivering a slam dunk — not the first thing I associate with the retro shooter genre, but a welcome addition nonetheless.

The other inspiration is a lesser-known sub-genre called bullet hell: niche shooters in which hundreds, sometimes thousands, of projectiles that inflict instant death gradually cover the screen. To survive, the player must memorize the intricate bullet patterns and carefully thread a craft through holes sometimes only a couple of pixels wide.

Original author: Soulskill

vinces99 writes "Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease. Now researchers have demonstrated that when humans use this brain-computer interface, the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand (abstract). That means learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed."

Read more of this story at Slashdot.

Original author: Thomas Joos


As a mobile UI or UX designer, you probably remember the launch of Apple’s first iPhone as if it was yesterday. Among other things, it introduced a completely touchscreen-centered interaction to an individual’s most private and personal device. It was a game-changer.

Today, kids grow up with touchscreen experiences like it’s the most natural thing. Parents are amazed by how fast their children understand how a tablet or smartphone works. This shows that touch and gesture interactions have a lot of potential to make mobile experiences easier and more fun to use.

Challenging Bars And Buttons

The introduction of “Human Interface Guidelines” and Apple’s App Review Board had a great impact on the quality of mobile applications. It helped a lot of designers and developers understand the core mobile UI elements and interactions. One of Apple’s popular suggestions, for instance, is to use UITabBar and UINavigationBar components — a guideline that many of us have followed, including me.
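
For reference, here is a minimal sketch of that conventional structure in UIKit: a UITabBarController whose tabs each wrap their content in a UINavigationController, giving you a UINavigationBar on top and a UITabBar at the bottom. FeedViewController and SearchViewController are placeholders:

```swift
import UIKit

final class FeedViewController: UIViewController {}
final class SearchViewController: UIViewController {}

// The "standard" structure Apple's guidelines nudge you toward:
// each tab wraps its content in a navigation controller.
func makeRootViewController() -> UIViewController {
    let feed = UINavigationController(rootViewController: FeedViewController())
    feed.tabBarItem = UITabBarItem(tabBarSystemItem: .featured, tag: 0)

    let search = UINavigationController(rootViewController: SearchViewController())
    search.tabBarItem = UITabBarItem(tabBarSystemItem: .search, tag: 1)

    let tabs = UITabBarController()
    tabs.viewControllers = [feed, search]
    return tabs
}
```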

In fact, if you can honestly say that the first iPhone application you designed didn’t have any top or bottom bar elements, get in touch and send over a screenshot. I will buy you a beer and gladly tweet that you were ahead of your time.

My issue with the top and bottom bars is that they fill almost 20% of the screen. When designing for a tiny canvas, we should use every available pixel to focus on the content. In the end, that’s what really matters.

In this innovative industry, mobile designers need some time to explore how to design more creative and original interfaces. Add to that Apple’s frustrating rejection of apps that “think outside the box,” and it is no surprise that experimental UI and UX designs such as Clear and Rise took a while to see the light of day. But they are here now. And while they might be quite extreme and focused on high-brow users and early adopters, they show us the great creative potential of gesture-driven interfaces.

Rise and Clear
Pulling to refresh feels very intuitive.

The Power Of Gesture-Driven Interfaces

For over two years now, I’ve been exploring the ways in which gestures add value to the user experience of a mobile application. The most important criterion for me is that these interactions feel very intuitive. This is why creative interactions such as Loren Brichter’s “Pull to Refresh” have become a standard in no time. Brichter’s interaction, introduced in Tweetie for iPhone, feels so intuitive that countless list-based applications suddenly adopted the gesture upon its appearance.
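
UIKit has since standardized the pattern, so a minimal pull-to-refresh takes only a few lines. A sketch using the built-in UIRefreshControl:

```swift
import UIKit

// Pull-to-refresh via UIKit's built-in control, which standardized
// the gesture Tweetie made popular.
final class TimelineViewController: UITableViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let refresh = UIRefreshControl()
        refresh.addTarget(self, action: #selector(reload), for: .valueChanged)
        refreshControl = refresh
    }

    @objc private func reload() {
        // Fetch new content here, then hide the spinner once it arrives.
        refreshControl?.endRefreshing()
    }
}
```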

Removing UI Clutter

A great way to start designing a more gesture-driven interface is to use your main screen only as a viewport to the main content. Don’t feel obliged to make important navigation always visible on the main screen. Rather, consider giving it a place of its own. Speaking in terms of a virtual 2-D or 3-D environment, you could design the navigation somewhere next to, below, behind, in front of, above or hidden on top of the main view. A dragging or swiping gesture is a great way to lead the user to this UI element. It’s up to you to define and design the app.

What I like about Facebook and Gmail on iOS, for instance, is their implementation of a “side-swiping” menu. This trending UI concept is very easy to use. Users swipe the viewport to the right to reveal navigation elements. Not only does this make the app very content-focused, but accessing any section of the application takes only two to three touch interactions. A lot of apps do far worse than that!

Sideswipe Menu
Facebook and Gmail’s side-swiping menu
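
A rough sketch of the underlying mechanics: a pan gesture drags the content view to the right, revealing a menu view that sits behind it. The 260pt menu width is arbitrary, and real implementations add velocity checks, edge detection and shadows:

```swift
import UIKit

// Side-swiping menu: the content view follows the finger and snaps
// open or shut, exposing a menu layered behind it.
final class SideMenuContainerController: UIViewController {
    private let menuWidth: CGFloat = 260
    private let contentView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        contentView.frame = view.bounds
        view.addSubview(contentView)  // the menu view would sit behind this
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan))
        contentView.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ pan: UIPanGestureRecognizer) {
        switch pan.state {
        case .changed:
            // Follow the finger, but never drag past the menu's width.
            let dragX = pan.translation(in: view).x
            contentView.frame.origin.x = max(0, min(menuWidth, dragX))
        case .ended:
            // Snap open if dragged past halfway, otherwise snap shut.
            let open = contentView.frame.origin.x > menuWidth / 2
            UIView.animate(withDuration: 0.25) {
                self.contentView.frame.origin.x = open ? self.menuWidth : 0
            }
        default:
            break
        }
    }
}
```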

In addition to the UI navigation, your app probably also supports contextual interactions. Adding the same two or three buttons below every content item will certainly clutter the UI! While buttons might seem to be useful triggers, gestures have great potential to make interaction with content more intuitive and fun. Don’t hesitate to integrate simple gestures such as tapping, double-tapping and tapping-and-holding to trigger important interactions. Instagram supports a simple double-tap to perform one of its key features, liking and unliking a content item. I would not be surprised to see other apps integrate this shortcut in the near future.
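
An Instagram-style double-tap shortcut takes very little code. A minimal sketch for a content cell (the like-handling closure is a placeholder for your own logic):

```swift
import UIKit

// Double-tap on a content cell toggles "like".
final class PhotoCell: UICollectionViewCell {
    var onToggleLike: (() -> Void)?

    override init(frame: CGRect) {
        super.init(frame: frame)
        let doubleTap = UITapGestureRecognizer(target: self,
                                               action: #selector(didDoubleTap))
        doubleTap.numberOfTapsRequired = 2
        contentView.addGestureRecognizer(doubleTap)
    }

    required init?(coder: NSCoder) { fatalError("not supported") }

    @objc private func didDoubleTap() {
        onToggleLike?()  // flip the like state and animate a heart overlay
    }
}
```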

An Interface That Fits

When designing an innovative mobile product, predicting user behavior can be very difficult. When we worked with Belgium’s Public Radio, my team really struggled with the UI balance between music visualization and real-time news. The sheer number of contextual scenarios and preferences made it very hard to come up with the perfect UI. So, we decided to integrate a simple dragging gesture to enable users to adjust the balance themselves.

Radio+
By dragging, users can balance music-related content and live news.

This gesture adds a creative contextual dimension to the application. The dragging gesture does not take the user from one section (news or music) to another. Rather, it enables the user to focus on the type of content they are most interested in, without missing out on the other.
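
The mechanics of such a balance gesture are simple: instead of treating the drag as a discrete navigation event, map the finger's horizontal position onto a continuous 0-to-1 value that sizes the two content panes. A minimal sketch (not the actual Radio+ code):

```swift
import UIKit

// Drag-to-balance: the pan sets a continuous weighting between
// two content panes rather than switching sections.
final class BalanceViewController: UIViewController {
    private(set) var balance: CGFloat = 0.5  // 0 = all music, 1 = all news

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan))
        view.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ pan: UIPanGestureRecognizer) {
        let x = pan.location(in: view).x
        balance = min(max(x / view.bounds.width, 0), 1)
        updateLayout()
    }

    private func updateLayout() {
        // Resize the music and news panes proportionally to `balance`.
    }
}
```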

Think in Terms of Time, Dimension and Animation

What action is triggered when the user taps an item? And how do you visualize that it has actually happened? How fast does a particular UI element animate into the viewport? Does it automatically go off-screen after five seconds of no interaction?

The rise of touch and gesture-driven devices dramatically changes the way we design interaction. Instead of thinking in terms of screens and pages, we are thinking more in terms of time, dimension and animation. You’ve probably noticed that fine-tuning user interactions and demonstrating them to colleagues and clients with static wireframe screenshots is not easy. You don’t fully see, understand and feel what will happen when you touch, hold, drag and swipe items.

Certain prototyping tools, including Pop and Invision, can help bring wireframes to life. They are very useful for testing an application’s flow and for pinpointing where and when a user might get stuck. Your application has a lot more going on than simple back-and-forth navigation, so you need to detect interface bugs and potential sources of confusion as soon as possible. You wouldn’t want your development team to point them out to you now, would you?

InvisionApp
Invision enables you to import and link your digital wireframes.

To be more innovative and experimental, get together with your client first and explain that a traditional wireframe is not the UX deliverable that they need. Show the value of interactive wireframes and encourage them to include this in the process. It might increase the timeline and budget, but if they are expecting you to go the extra mile, it shouldn’t be a problem.

I even offer to produce a conceptual interface video for my clients, because once they’ve worked with the interactive wireframes and sorted out the details, they will often need something sexier to present to their internal stakeholders.

The Learning Curve

When designing gesture-based interactions, be aware that every time you remove UI clutter, the application’s learning curve goes up. Without visual cues, users could get confused about how to interact with the application. A bit of exploration is no problem, but users should know where to begin. Many apps show a UI walkthrough when first launched, and I agree with Max Rudberg that walkthroughs should explain only the most important interactions. Don’t explain everything at once. If it’s too explicit and long, users will skip it.

Why not challenge yourself and gradually introduce creative UI hints as the user uses the application? This pattern is often referred to as progressive disclosure and is a great way to show only the information that is relevant to the user’s current activity. YouTube’s Capture application, for instance, tells the user to rotate the device to landscape orientation just as the user is about to open the camera for the first time.

Visual Hints
Fight the learning curve with a UI walkthrough and/or visual hints.
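
A minimal sketch of progressive disclosure: persist a flag the first time a hint is shown so it never reappears. The key name and wording are arbitrary, and this is not YouTube Capture's actual implementation:

```swift
import UIKit

// Show a one-time hint when the user first reaches a feature.
func showRotateHintIfNeeded(from viewController: UIViewController) {
    let key = "didShowRotateHint"
    guard !UserDefaults.standard.bool(forKey: key) else { return }
    UserDefaults.standard.set(true, forKey: key)

    let alert = UIAlertController(
        title: "Tip",
        message: "Rotate your device to landscape to start filming.",
        preferredStyle: .alert
    )
    alert.addAction(UIAlertAction(title: "Got it", style: .default))
    viewController.present(alert, animated: true)
}
```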

Adding visual cues to the UI is not the only option. In the Sparrow app, the search bar appears for a few seconds before animating upward and going off screen, a subtle way to say that it’s waiting to be pulled down.
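
That kind of self-revealing hint is easy to reproduce: show the element briefly, then animate it out of sight. A sketch of the idea, assuming the search bar is the table's header view (not Sparrow's actual code):

```swift
import UIKit

// Let the search bar show briefly, then scroll it away, suggesting
// the list can be pulled down to reveal it again.
func flashSearchBar(in tableView: UITableView, height: CGFloat = 44) {
    tableView.contentOffset = .zero  // start with the search bar visible
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        UIView.animate(withDuration: 0.3) {
            tableView.contentOffset = CGPoint(x: 0, y: height)
        }
    }
}
```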

Stop Talking, Start Making

The iPhone ushered in a revolution in interactive communication. Only five years later, touchscreen devices are all around us, and interaction designers are redefining the ways people use digital content.

We need to explore and understand the potential of touch and gesture-based interfaces and start thinking more in terms of time, dimension and animation. As demonstrated by several innovative applications, gestures are a great way to make an app more content-focused, original and fun. And many gesture-based interactions that seem too experimental at first come to be seen as very intuitive.

For a complete overview of the opportunities for gestures on all major mobile platforms, check out Luke Wroblewski’s “Touch Gesture Reference Overview.” I hope you’re inspired to explore gesture-based interaction and intensify your adventures in mobile interfaces. Don’t be afraid to go the extra mile. With interactive wireframes, you can iterate your way to the best possible experience. So, let’s stop talking and start making.


© Thomas Joos for Smashing Magazine, 2013.

Original author: Sean Hollister


Three months ago, celebrated video game publisher Valve did something completely out of character: it fired up to 25 workers, in what one employee dubbed the "great cleansing." At the time, co-founder Gabe Newell quickly reassured gamers that the company wouldn't be canceling any projects, but it just so happens that one project managed to get away.

Valve was secretly working on a pair of augmented reality glasses... and those glasses are still being built by two Valve employees who lost their jobs that day.

"This is what I'm going to build come hell or high water."

Former Valve hardware engineer Jeri Ellsworth and programmer Rick Johnson spent over a year working on the project at Valve, and have been putting in six days a week, 16+...


Original author: Unknown Lamer

MojoKid writes "There's no doubt that gaming on the Web has improved dramatically in recent years, but Mozilla believes it has developed new technology that will deliver a big leap in what browser-based gaming can become. The company developed a highly-optimized version of JavaScript that's designed to 'supercharge' a game's code to deliver near-native performance. And now that innovation has enabled Mozilla to bring Epic's Unreal Engine 3 to the browser. As a sort of proof of concept, Mozilla debuted this BananaBread game demo that was built using WebGL, Emscripten, and the new JavaScript version called 'asm.js.' Mozilla says that it's working with the likes of EA, Disney, and ZeptoLab to optimize games for the mobile Web, as well." Emscripten was previously used to port Doom to the browser.

Read more of this story at Slashdot.
