
Australia's biggest casino fell victim to a US$33 million scam after its own surveillance systems were used against it to supply a high-roller with information on how he should play. According to The Herald Sun, thieves gained unauthorised access to the casino's security systems "several weeks ago" via a staff member who has since been sacked.

It's believed that the high roller — who was reportedly staying in an "opulent villa" reserved for VIP guests — was exposed over the course of eight hands of cards, played at a table in an exclusive area of the casino. The gambler, along with his family, found himself removed from his VIP accommodation in the middle of the night, saw his betting license revoked and was (unsurprisingly)...


What you’re watching in the video above doesn’t involve cameras or motion sensors. It’s the kind of brain-to-machine, body-to-interaction interface most of us associate with science fiction. And while the technology has made the occasional appearance in unusual, niche commercial applications, it’s poised now to blow wide open for music – open as in free and open source.

Erasing the boundary between contracting a muscle in the bio-physical realm and producing electronic sound in the virtual realm is what Xth Sense is all about. Capturing biological data is all the rage these days, seen primarily in commercial form in products for fitness, but a growing trend in how we might make our computers accessories for our bodies as well as our minds. (Or is that the other way around?) This goes one step further: the biological becomes the interface.

Artist and teacher Marco Donnarumma took first prize with this project in the prestigious Guthman Musical Instrument Competition at Georgia Tech in the US. Born in Italy and based in Edinburgh, Scotland, Marco explains to us how the project works and why he took it up. It should whet your appetite as we await an open release for other musicians and tinkerers to try next month. (By the way, if you’re in the New York City area, Marco will be traveling to the US – a perfect chance to collaborate, meet, or set up a performance or workshop; shout if you’re interested.)

Hypo Chrysos live at Trendelenburg AV Festival, Gijon, Spain, December 2011.

CDM: Tell us a bit about yourself. You’re working across disciplines, so how do you describe what you do?

Marco: People would call me a media and sound artist. I would say what I love is performing, but at the same time, I’m really curious about things. So, most of the time I end up coding my own software, developing devices and now even designing wearable tech. For some years now I’ve worked only with free and open source tools, and this is naturally reflected in what I do and how I do it. (Or at least I hope so!)

I just got back from Atlanta, US, where the Xth Sense (XS) was awarded first prize in the Margaret Guthman New Musical Instrument Competition and named the “world’s most innovative new musical instrument.” [See announcement from Georgia Tech.]

It’s an encouraging achievement and I’m still buzzing, especially because the other 20 finalists all presented great ideas. Overall, it was an inspiring event, and I warmly encourage musicians and inventors to participate next year. My final performance:

Make sure to use a proper sound system [when watching the videos]; most of the sound spectrum lives between 20 and 60 Hz.

Music for Flesh II live at Georgia Tech Center for Music Technology, Atlanta, USA, February 2012. Photo courtesy the artist.

You’re clenching your muscles, and something is happening – can you tell us how this XS system works?

Marco: My definition of it is “a biophysical framework for musical performance and responsive milieux.” In other words, it is a technology that extends some intrinsic sonic capabilities of the human body through a computer system that senses the physical energy released by muscle tissues.

I started developing it in September 2010 at the SLE, the Sound Lab at the University of Edinburgh, and got it ready to go in March 2011. It has evolved a lot in many ways ever since.

The Xth Sense wearable biosensors by Chris Scott.

The XS is composed of custom biophysical sensors and custom software.

At the onset of a muscle contraction, energy is released in the form of acoustic sound. That is to say, similarly to the string of a violin, each muscle tissue vibrates at specific frequencies and produces a sound (called a mechanomyographic signal, or MMG). It is not audible to the human ear, but it is indeed a soundwave that resonates from the body.

The MMG data is quite different from locative data you can gather with accelerometers and the like; whereas the latter reports the consequence of a movement, the former directly represents the energy impulse that causes that movement. If you add to this a high sampling rate (up to 192,000 Hz if your sound card supports it) and very low latency (measured at 2.3 ms), you can see why the responsiveness of the XS can be highly expressive.

The XS sensors capture the low-frequency acoustic vibrations produced by a performer’s body and send them to the computer as an audio input. The XS software analyzes the MMG in order to extract the characteristics of the movements, such as dynamics of a single gesture, maximum amplitude of a series of gestures in time, etc.

These are fed to some algorithms that produce the control data (12 discrete and continuous variables for each sensor) to drive the sound processing of the original MMG.

Eventually, the system plays back both the raw muscle sounds (slightly transposed, to around 50–60 Hz, to make them more audible) and the processed muscle sounds.
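Marco's actual implementation is in Pd, but the signal path he describes — treat the muscle vibration as audio, extract an amplitude envelope as control data, and transpose the raw rumble up into the audible range — can be sketched in Python. Everything numeric here (the 48 kHz rate, 512-sample frames, 3× transposition ratio) is an illustrative assumption, not the XS's real parameters:

```python
import numpy as np

SR = 48_000  # assumed sample rate; the XS reportedly supports up to 192 kHz

def rms_envelope(mmg, frame=512):
    """Frame-wise RMS of the raw MMG signal: one simple control variable."""
    n = len(mmg) // frame
    frames = mmg[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def transpose(mmg, ratio=3.0):
    """Crude transposition by resampling: reading the signal `ratio` times
    faster shifts a ~20 Hz muscle rumble up toward the audible 50-60 Hz zone."""
    idx = np.arange(0, len(mmg) - 1, ratio)
    return np.interp(idx, np.arange(len(mmg)), mmg)

# Stand-in MMG: a 20 Hz sine, roughly where muscle vibration lives.
t = np.arange(SR) / SR
mmg = 0.5 * np.sin(2 * np.pi * 20 * t)

envelope = rms_envelope(mmg)        # control data to drive sound processing
audible = transpose(mmg, ratio=3)   # raw muscle sound, pitched up to ~60 Hz
```

A real system would of course stream this block by block from the sound card rather than process a pre-recorded buffer.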

I like to term this model of performance biophysical music, in contrast with biomusic, which is based on the electrical impulses of muscles and brainwaves.

By differently contracting muscles (which has a different meaning than simply “moving”) one can create and sculpt musical material in real-time. One can design a specific gesture that produces a specific sonic result, what I call a sound-gesture. These can be composed in a score, or improvised, or also improvised on a more or less fixed score.

The XS software also has a sensing sequencing timeline: with a little machine learning (implemented just a few days ago), the system understands when you’re still or moving, when you’re being fast or slow, and can use this data to change global parameters or functions, or to play with the timing of events. For example, the computer can track your behaviour in time and wait for you to stop whatever you’re doing before switching to a different set of functions.
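The stillness-gating behaviour described above can be sketched as a tiny state tracker: hold off on switching function sets until the performer's signal energy has stayed below a threshold for several consecutive frames. The threshold and frame count here are invented; the real logic lives in Marco's Pd patches:

```python
class ActivityGate:
    """Waits for the performer to go still before allowing a switch
    to the next set of functions (thresholds are illustrative)."""

    def __init__(self, threshold=0.05, still_frames=10):
        self.threshold = threshold      # RMS below this counts as "still"
        self.still_frames = still_frames
        self.quiet = 0                  # consecutive quiet frames seen so far

    def update(self, rms):
        """Feed one envelope frame; returns True when it's safe to switch."""
        self.quiet = self.quiet + 1 if rms < self.threshold else 0
        return self.quiet >= self.still_frames

# Example: the gate opens only after three quiet frames in a row.
gate = ActivityGate(threshold=0.05, still_frames=3)
history = [gate.update(rms) for rms in [0.4, 0.01, 0.02, 0.01, 0.3]]
```

Any burst of movement resets the counter, so the system never interrupts a gesture mid-flight.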

The XS sensors are wearable devices, so the computer can be forgotten in a corner of the stage; the performer has complete freedom of movement, and the audience is not exposed to the technology, but rather to the expressivity of the performance. What I like most about the XS is that it is a flexible and multi-modal instrument. One can use it to:

  • capture and playback acoustic sounds of the body,
  • control audio and video software on the computer, or
  • capture body sounds and control them through the computer simultaneously.

This opens up an interesting perspective on the applications of the XS to musical performance, dance, theatre and interaction design. The XS can also be used only as a gestural controller, although I never use it exclusively this way. We have thousands of controllers out there.

Besides, I wanted the XS to be accessible, usable, hackable and redistributable. Unfortunately, the commercial products dealing with biosignals are mostly expensive and — most importantly — closed to the community. See the Emotiv products (US$299 Neuro Headset, not for developers), or the BioFlex (US$392.73). One could argue that the technology is complex, and that’s why those devices are expensive and closed. That could make sense, but who says we can’t produce new technologies that openly offer similar or new capabilities at a much lower cost?

The formal recognition of the XS as an innovative musical instrument and the growing effort of the community in producing DIY EEG, ECG and Biohacking devices are a clear statement in this sense. I find this movement encouraging and possibly indispensable nowadays, as the information technology industry is increasingly deploying biometric data for adverts and security systems. For the geeky ones there are some examples in a recent paper of mine for the 2012 CHI workshop on Liveness.

For those reasons, the XS hardware design has been implemented in the simplest form I could think of; the parts needed to build an XS sensor cost about £5 altogether, and the schematics look purposely dumb. The sensors can be worn on any part of the body. I worked with dancers who wore them on the neck and legs; a colleague stuck one to his throat to capture the resonances of his voice; I use them on the arms, or to capture the pumping of the blood flow and the heart rate.

The XS software is free, built in Pd, aka Pure Data, and comes with a proper, user-friendly graphical user interface (GUI) and its own library, which includes over one hundred objects with help files. It is developed on Linux and is Mac OS X compatible; I’m not developing for Windows, but some people have got it working there too. A big thumbs up goes to our wonderful Pd community; if I had not been reading and learning through the Pd mailing list for the past five years, I would never have been able to code this stuff.

The Xth Sense software Graphical User Interface. Built in Pd.

The public release of the project will be in April. The source code, schematics and tutorials will be freely available online, and there will be DIY kits for the lazier ones. I’m already collecting orders for the first batch of DIY kits, so if anybody is interested, please get in touch:
http://marcodonnarumma.com/contact

I do hope to see the system hacked and extended, especially because the sensors were initially built with the support of the folks at the Dorkbot ALBA/Edinburgh Hacklab. I’m also grateful to the community around me, friends, musicians, artists, devs and researchers, for contributing to the success of the project by giving feedback, inspiring and sharing (you know who you are!).

Thanks, Marco! We’ll be watching!

More on the Work

http://marcodonnarumma.com/works/xth-sense/
http://marcodonnarumma.com/works/music-for-flesh-ii/
http://res.marcodonnarumma.com/blog/

And the Edinburgh hack lab:
http://edinburghhacklab.com/

Biological Interfaces for Music

There isn’t space here to recount the various efforts to do this; Marco’s design to me is notable mainly in its simplicity and – hopefully, as we’ll see next month – accessibility to other users. I’ve seen a number of brain interfaces just in the past year, but perhaps someone with more experience on the topic would like to share; that could be a topic for another post.

Entirely unrelated to music, but here’s the oddest demo video I’ve seen of human-computer interfacing, which I happened to see today. (Well, unrelated to music until you come up with something this crazy. Go! I want to see your band playing with interactive animal ears.)

Scientific American’s blog tackles the question this week (bonus 80s sci-fi movie reference):
Brain-Machine Interfaces in Fact and Fiction

I’ve used up my Lawnmower Man reference quota for the month, so tune in in April.


Have you ever looked at a bizarre building design and wondered, “What were the architects thinking?” Or have you simply felt frustrated by a building that made you uncomfortable, or felt anger when a beautiful old building was razed and replaced with a contemporary eyesore? You might be forgiven for thinking “these architects must be blind!” New research shows that in a real sense, you might actually be right.

That’s Michael Mehaffy and Nikos A. Salingaros describing a phenomenon we’re all familiar with, in their article “Architectural Myopia: Designing for Industry, Not People.” As I read the article, I became increasingly uncomfortable as I realized that the whole thing might as well have been written about Web design (and about our responses to the designs of our peers). How often do we look at a website or app and remark to ourselves (and on Twitter) that “these designers must have been blind!” Sometimes we’re just being whiney about minute details (as we should be), but other times we do have a point: “What were they thinking?”

Longaberger-building-in-Newark
Longaberger Home Office, Newark, Ohio. Image source.

In this article, we’ll discuss “designer myopia”: the all-too-common phenomenon whereby, despite our best intentions, we sometimes design with a nearsightedness that results in websites and applications that please ourselves and impress our peers but don’t meet user and business goals. With Mehaffy and Salingaros’s article as our guide, we’ll investigate the causes of designer myopia, and then explore some solutions to help us take the focus off ourselves and back on the people we’re designing for.

The Causes Of Designer Myopia

If the language in the opening paragraph sounds familiar, it’s because most of us privately and publicly mutter “What were they thinking?” almost every day as we move across the Web. We analyze the new Twitter app; we take it upon ourselves to redesign popular websites — and then we wonder if we should even be doing that. One thing is clear, though: we’re good at pointing out designer myopia in our peers.

But what are the causes of this lack of imagination and foresight in our work? Shouldn’t we be smart enough to avoid the obvious traps of designing too much from our own viewpoints and failing to take the wider user context into account? Well, it turns out that we quite literally see the world very differently than others do. Again, from “Architectural Myopia”:

Instead of a contextual world of harmonious geometric relationships and connectedness, architects tend to see a world of objects set apart from their contexts, with distinctive, attention-getting qualities.

In other words, we see typography and rounded corners where normal people just see websites to get stuff done on. We see individual shapes and colors and layout where our users just see a page on the Internet. Put another way, we’re unable to see the forest for the trees.

How did we get here? Notice the striking resemblance to Web design as Mehaffy and Salingaros describe the slippery slope that has led to this state in architecture:

With the coming of the industrial revolution, and its emphasis on interchangeable parts, the traditional conception of architecture that was adaptive to context began to change. A building became an interchangeable industrial design product, conveying an image, and it mattered a great deal how attention-getting that image was. The building itself became a kind of advertisement for the client company and for the architect (and in the case of residences, for the homeowner seeking a status symbol). The context was at best a side issue, and at worst a distraction, from the visual excitement generated by the object.

This is why we often see designs that seem to be built for Dribbble, portfolios and “7 Jaw-Dropping Minimalist Designs” blog posts, instead of being “adaptive to context” based on user needs. We have gained much from the “industrialization” of design through UI component libraries and established patterns, but we’ve also lost some of the unique context-based thinking that should go into solving every design problem.

Jon Tan touches on this in “Taxidermista,” his excellent essay on design galleries in the first issue of The Manual:

Galleries do not bear sole responsibility for how design is commissioned. However, they do encourage clients and designers to value style more than process. They do promote transient fashion over fit and make trends of movements such as minimalism or styles like grunge or the ubiquitous Apple-inspired aesthetic.

The result of all of this is that we sometimes end up designing primarily for ourselves and our close-knit community. Jeffrey Goldberg reminds us that this is true for much of the technology industry in “Convenience Is Security”:

Security systems (well, the good ones anyway) are designed by people who fully understand the reasons behind the rules. The problem is that they try to design things for people like themselves — people who thoroughly understand the reasons. Thus we are left with products that only work well for people who have a deep understanding of the system and its components.

And so we end up with a proliferation of beautiful websites and applications that only we find usable.

Dilbert Cartoon
We all follow some rules of thumb without understanding the reasons behind them.

We can’t talk about designing primarily for the community without bringing up the awkward point that we often do it deliberately. We thrive on the social validation that comes from positive Twitter comments, being featured in design galleries and getting a gazillion Dribbble likes. And let’s be honest: that validation also helps us get more clients. This is just part of human nature, and not necessarily a bad thing. But it can be a bad thing; so at the very least, we need to call it out as another possible cause for designer myopia so that we can be conscious of it.

The Manual
The Manual brings clarity to the ‘why’ of Web design, and much more.

Oh, and while we’re at it, let’s ask the obvious next question. Why are we so good at noticing when others fall into the myopia trap but fail to catch ourselves when we do it? In “Why We’re Better at Predicting Other People’s Behaviour Than Our Own,” Christian Jarrett reports on some recent research that might provide the answer:

[When] predicting our own behaviour, we fail to take the influence of the situation into account. By contrast, when predicting the behaviour of others, we correctly factor in the influence of the circumstances. This means that we’re instinctually good social psychologists but at the same time we’re poor self-psychologists.

In other words, we’re much better at taking the entire context into consideration when looking at other people’s designs than when we are creating our own. Scary stuff.

So, if designer myopia is indeed a pervasive problem (and if we are not good at recognizing it in ourselves), what do we do to fix it? I’d like to propose some established but often-ignored techniques to get us out of this dilemma.

1. Conduct Observational User Research In Context

The first thing that Mehaffy and Salingaros suggest in their article to overcome myopia is this:

First of all, re-integrate the needs of human beings, their sensory experience of the world, and their participation into the process of designing buildings. Leading design theory today advocates “co-design,” in which the users become part of the design team, and guide it through the evolutionary adaptations to make a more successful, optimal kind of design. Architects spend more time talking to their users, sharing their perception and understanding their needs: not just the architect’s selfish need for artistic self-expression, or worse, his/her need to impress other architects and elite connoisseur-critics.

Note that this is not just about asking users what they think. It’s about making users part of the design process in a helpful, methodologically sound manner. To accomplish this, we can look to anthropology to play a substantial role in the design of products and experiences. Ethnography (often called contextual inquiry in the user-centered design world) is the single best way to uncover unmet needs and make sure we are solving the right problems for our users.

In “Using Ethnography to Improve User Experience,” Bonny Colville Hyde describes ethnography as follows:

Ethnographers observe, participate and interview groups of people in their natural environments and devise theories based on analysis of their observations and experiences. This contrasts with other forms of research that generally set out to prove or disprove a theory.

That’s the core of it: we do ethnography to learn, not to confirm our beliefs. By using this method to understand the culture and real needs of our users, we’re able to design better user-centered solutions than would be possible if we relied only on existing UI patterns and some usability testing.

Leaving the office and spending time observing users in their own environments is the best way to understand how a product is really being used in the wild. It’s the most efficient way to get out of your own head.

2. Design To Blend In

Let’s stick with the architecture theme for a moment. The concept of “paving the cowpaths” is another effective way to look beyond ourselves and to design websites and applications that form part of our users’ landscapes (rather than break their mental models). In “Architecture, Urbanism, Design and Behaviour: A Brief Review,” Dan Lockton writes:

One emergent behavior-related concept arising from architecture and planning which has also found application in human-computer interaction is the idea of desire lines, desire paths or cowpaths. The usual current use of the term […] is to describe paths worn by pedestrians across spaces such as parks, between buildings or to avoid obstacles […] and which become self-reinforcing as subsequent generations of pedestrians follow what becomes an obvious path. […]

[T]here is potential for observing the formation of desire lines and then “codifying” them in order to provide paths that users actually need, rather than what is assumed they will need. In human-computer interaction, this principle has become known as “Pave the cowpaths”.

This is such an interesting perspective on user-centered design. By starting a design project with an explicit goal to “pave the cowpaths,” we will always be pulled back into a frame of mind that asks how the design can better blend in with our users’ lives and with what they already do online. The same questions will keep haunting us, and rightly so:

  • Do we have analytics to back up this behavior?
  • Are we sure this is what users naturally do on the website?
  • We know that most users click on this navigation element to get things done. How do we make that behavior easier for them?
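One way to answer questions like these with data rather than assumption is to mine the analytics for the paths users actually wear down. As a hedged sketch (the session data and page names here are invented), counting page-to-page transitions across visits surfaces the "cowpaths" worth paving:

```python
from collections import Counter

def desire_lines(sessions, length=2):
    """Count how often each page-to-page transition is walked across all
    sessions; the most heavily worn paths are candidates for 'paving'."""
    paths = Counter()
    for pages in sessions:
        for i in range(len(pages) - length + 1):
            paths[tuple(pages[i:i + length])] += 1
    return paths

# Invented clickstream data: one list of page names per visit.
sessions = [
    ["home", "search", "product"],
    ["home", "search", "product", "checkout"],
    ["home", "blog"],
]
worn = desire_lines(sessions)
```

Here the home → search → product route dominates, which would argue for making that journey the shortest and most obvious one in the design.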

In the same paragraph in “Taxidermista,” Jon Tan also calls for us to step back and ask questions like these before starting to design:

The answers to a project’s questions may have something to do with fashion, but not often. Good design does not have a shelf life. The best web designers gently disregard issues of style at the start. They rewind their clients back to asking the right questions, so they can rewrite the brief and understand the objectives before they propose solutions.

By asking the right questions, we focus our effort on fitting into the ways that users move on the Web, as opposed to bending them to our will.

3. Triangulate Results

The two recommendations above are very specific, so I’d also like to make a more general point. There are, of course, several other user-research methodologies to help us get into the minds of users and bring them into the design process in a helpful, meaningful way. Methods such as concept testing, participatory design and, of course, usability testing all have their place. But the real power lies in using not just one or two of these methods, but three or more. This is where triangulation comes in:

Triangulation is a powerful technique that facilitates validation of data through cross verification from more than two sources. In particular, it refers to the application and combination of several research methodologies in the study of the same phenomenon.

Using multiple data sources — both qualitative and quantitative — is a great way to avoid any myopia traps along the way. In addition to (or instead of, depending on the project) the two methodologies covered above, you should use as many appropriate techniques as possible to help confirm your intuition and direction.
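The cross-verification at the heart of triangulation can be sketched very simply: treat each research method as an independent source of evidence, and only trust a finding once at least two methods corroborate it. The claims and method labels below are invented for illustration:

```python
def triangulate(findings, min_sources=2):
    """Keep only the findings corroborated by at least `min_sources`
    independent research methods."""
    return {claim: sources
            for claim, sources in findings.items()
            if len(sources) >= min_sources}

# Invented evidence map: each claim -> the methods that support it.
evidence = {
    "users miss the search box": {"analytics", "usability test", "interviews"},
    "the footer links are ignored": {"analytics"},
}
confirmed = triangulate(evidence)
```

A single-source finding like the footer claim isn't discarded forever; it simply stays a hypothesis until another method backs it up.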

As Catriona Cornett points out in “Using Multiple Data Sources and Insights to Aid Design”:

When used correctly, data from multiple sources can allow us to better identify the context in which our designs live. It can help us validate our assumptions and approach design with confidence and not subjective opinion. This not only helps to create better design, but also helps us achieve that all-important buy-in from stakeholders. It’s easier to defend a design when you have deep, rich insights to back it up.

The first response I get when proposing triangulation (or sometimes even just one research method) is usually, “We don’t have time!” The good news is that this doesn’t have to slow you down — even an hour at a coffee shop observing real users with your product will shock you out of your myopia. The only thing that’s not an option is skipping research completely.

Summary

User research and the techniques discussed in this article aren’t new, but they’re usually left to specialist researchers to champion, or they’re swept under the rug because “We’re using established UI patterns on this one.” Hopefully, this article has shown that designer myopia is too common and too dangerous to ignore or to be left to specialist researchers to fix. Sure, user researchers are critical to ensuring that a proper methodology is followed, but we can all get out there and use the data and information available to us to make sure we don’t put too much of our own viewpoints into our designs.

Web design is personal — deeply personal. As Alex Charchar puts it in his gut-wrenching essay for The Manual:

I now know that it is through love and passion and happiness that anything of worth is brought into being. A fulfilled and accomplished life of good relationships and craftsmanship is how I will earn my keep.

I doubt that any of us would disagree with those words. Our best work happens when we throw ourselves wholeheartedly into it. But this outlook on life and design comes with its own dangers that we need to watch out for. And the biggest danger is in being unable to see beyond our own passion and taste and, with the best intentions, in failing to make the necessary connections with our users.

My hope for all of us is that the three simple guidelines discussed here — contextual user research, designing to blend in, and research triangulation — will enable us to keep the perspective we need as we throw everything we’ve got at the design problems that we have to solve every day.


© Rian van der Merwe for Smashing Magazine, 2012.


Here's an upcoming action puzzler with a dark side: you are A Virus Named Tom, and you've been released into a world of robots and technology to shut them all down. Via a series of grids, you're left to switch and flip pieces around, creating a clear path for the virus to spread.

While the first batch of levels appear to be pretty simple - at least from what the trailer shows anyway - the game soon ramps up the heat and you'll need to plan your routes appropriately if you want to dodge all the security systems that are trying to end your life. The most intriguing aspect for me is the multiplayer, as both the co-op and versus elements look like a ton of fun. Let's just hope it's online multiplayer and not just local!

The game is pencilled in for a release later this year, and the dev team hopes to fire out PC, XBLA and PSN versions. Check out the official site for more details, and jump below the cut for a video that shows the multiplayer in a little more depth.
