
Technical communication


Programmer Steve Losh has written a lengthy explanation of what separates good documentation from bad, and how to go about planning and writing documentation that will actually help people. His overarching point is that documentation should be used to teach, not to dump excessive amounts of unstructured information onto a user. Losh takes many of the common documentation tropes — "read the source," "look at the tests," "read the docstrings" — and makes analogies with learning everyday skills to show how silly they can be. "This is your driving teacher, Ms. Smith. ... If you have any questions about a part of the car while you’re driving, you can ask her and she’ll tell you all about that piece. Here are the keys, good luck!" He has a similar opinion of API documentation: "API documentation is like the user’s manual of a car. When something goes wrong and you need to replace a tire it’s a godsend. But if you’re learning to drive it’s not going to help you because people don’t learn by reading alphabetized lists of disconnected information." Losh's advice for wikis is simple and straightforward: "They are bad and terrible. Do not use them."

Original author: Soulskill

angry tapir writes "Researchers at Microsoft Research have produced a prototype software system that can be used on smartphones to infer a user's mood. The 'MoodScope' system uses smartphone usage patterns to determine whether someone is happy, calm, excited, bored or stressed, and could potentially add a new dimension to mobile apps (as well as, the researchers note, open up a Pandora's box of privacy issues). The researchers created a low-power background service for iPhones and Android handsets that (with training) can offer reasonable detection of mood, and offers an API that app developers could hook into."



Original author: Cesar Torres


Tumblr Creative Director Peter Vidani (photo credit: Cesar Torres)

New York City noise blares right outside Tumblr’s office in the Flatiron District in Manhattan. Once inside, the headquarters hum with a quiet intensity. I am surrounded by four dogs that employees have brought to the workspace today. Apparently, there are even more dogs lurking somewhere behind the perpendicular rows of desks. What makes the whole thing even spookier is that these dogs don’t bark or growl. It’s like someone’s told them that there are developers and designers at work, and somehow they’ve taken the cue.

I’m here to see Tumblr’s Creative Director Peter Vidani, who is going to pull back the curtain on the design process and user experience at Tumblr. And when I say design process, I don’t just mean color schemes or typefaces. I am here to see the process of interaction design: how the team at Tumblr comes up with ideas for the user interface on its website and its mobile apps. I want to find out how those ideas are shaped into a final product by the engineering team.

Back in May, Yahoo announced it was acquiring Tumblr for $1.1 billion. Yahoo indicated that Tumblr would continue to operate independently, though we will probably see a lot of content crossover between the millions of blog posts hosted by Tumblr and Yahoo’s search engine technology. It’s a little-known fact that Yahoo has provided some useful tools for UX professionals and developers over the years through its Design Pattern Library, which shares some of Yahoo’s most successful and time-tested UI touches and interactions with Web developers. It’s probably too early to tell whether Tumblr’s UI elements will filter back into these libraries. In the meantime, I talked to Vidani about how Tumblr UI features come to life.


Original author: Thomas Joos

As a mobile UI or UX designer, you probably remember the launch of Apple’s first iPhone as if it were yesterday. Among other things, it introduced a completely touchscreen-centered interaction to an individual’s most private and personal device. It was a game-changer.

Today, kids grow up with touchscreen experiences like it’s the most natural thing. Parents are amazed by how fast their children understand how a tablet or smartphone works. This shows that touch and gesture interactions have a lot of potential to make mobile experiences easier and more fun to use.

Challenging Bars And Buttons

The introduction of “Human Interface Guidelines” and Apple’s App Review Board had a great impact on the quality of mobile applications. It helped a lot of designers and developers understand the core mobile UI elements and interactions. One of Apple’s popular suggestions, for instance, is to use UITabBar and UINavigationBar components — a guideline that many of us have followed, including me.
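
To make this concrete, here is a minimal sketch of that guideline, written in modern Swift for readability (apps of that era used Objective-C). The UITabBarController supplies the bottom bar and each UINavigationController supplies the top bar; FeedViewController and ProfileViewController are hypothetical placeholder screens.

```swift
import UIKit

// The standard "bars" structure Apple recommends: a UITabBarController
// hosting one UINavigationController stack per section. The tab bar is
// the bottom bar; each navigation controller draws the top bar.
final class FeedViewController: UIViewController {}     // hypothetical screen
final class ProfileViewController: UIViewController {}  // hypothetical screen

func makeRootViewController() -> UIViewController {
    let feed = UINavigationController(rootViewController: FeedViewController())
    feed.tabBarItem = UITabBarItem(title: "Feed", image: nil, tag: 0)

    let profile = UINavigationController(rootViewController: ProfileViewController())
    profile.tabBarItem = UITabBarItem(title: "Profile", image: nil, tag: 1)

    let tabs = UITabBarController()
    tabs.viewControllers = [feed, profile]
    return tabs
}
```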

In fact, if you can honestly say that the first iPhone application you designed didn’t have any top or bottom bar elements, get in touch and send over a screenshot. I will buy you a beer and gladly tweet that you were ahead of your time.

My issue with the top and bottom bars is that they fill almost 20% of the screen. When designing for a tiny canvas, we should use every available pixel to focus on the content. In the end, that’s what really matters.
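
(For reference: on the 320 × 480 point screens of the original iPhones, a 44-point UINavigationBar plus a 49-point UITabBar comes to 93 points, roughly 19% of the screen height, and more once you count the status bar.)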

In this innovative industry, mobile designers need some time to explore how to design more creative and original interfaces. Add to that Apple’s frustrating rejection of apps that “think outside the box,” and it is no surprise that experimental UI and UX designs such as Clear and Rise took a while to see the light of day. But they are here now. And while they might be quite extreme and focused on high-brow users and early adopters, they show us the great creative potential of gesture-driven interfaces.

Rise and Clear
Pulling to refresh feels very intuitive.

The Power Of Gesture-Driven Interfaces

For over two years now, I’ve been exploring the ways in which gestures add value to the user experience of a mobile application. The most important criterion for me is that these interactions feel very intuitive. This is why creative interactions such as Loren Brichter’s “Pull to Refresh” have become a standard in no time. Brichter’s interaction, introduced in Tweetie for iPhone, feels so intuitive that countless list-based applications suddenly adopted the gesture upon its appearance.
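
As a reference point, here is a sketch of the pattern using UIKit’s UIRefreshControl, the system control this interaction eventually grew into; loadLatestTweets is a hypothetical stand-in for a real network request.

```swift
import UIKit

// A sketch of "pull to refresh" using UIRefreshControl, available on
// UITableViewController since iOS 6. loadLatestTweets is hypothetical.
final class TimelineViewController: UITableViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        refreshControl = UIRefreshControl()
        refreshControl?.addTarget(self, action: #selector(reload), for: .valueChanged)
    }

    @objc private func reload() {
        loadLatestTweets {
            // End the refresh animation on the main thread once data arrives.
            DispatchQueue.main.async { self.refreshControl?.endRefreshing() }
        }
    }

    private func loadLatestTweets(completion: @escaping () -> Void) {
        // Placeholder: simulate a one-second network round trip.
        DispatchQueue.global().asyncAfter(deadline: .now() + 1, execute: completion)
    }
}
```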

Removing UI Clutter

A great way to start designing a more gesture-driven interface is to use your main screen only as a viewport to the main content. Don’t feel obliged to make important navigation always visible on the main screen. Rather, consider giving it a place of its own. Speaking in terms of a virtual 2-D or 3-D environment, you could design the navigation somewhere next to, below, behind, in front of, above or hidden on top of the main view. A dragging or swiping gesture is a great way to lead the user to this UI element. It’s up to you to define and design the app.

What I like about Facebook and Gmail on iOS, for instance, is their implementation of a “side-swiping” menu. This trending UI concept is very easy to use. Users swipe the viewport to the right to reveal navigation elements. Not only does this make the app very content-focused, but accessing any section of the application takes only two to three touch interactions. A lot of apps do far worse than that!

Sideswipe Menu
Facebook and Gmail’s side-swiping menu
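
Here is a bare-bones sketch of how such a menu can be wired up with a pan gesture. This is an illustrative reconstruction of the pattern, not Facebook’s or Gmail’s actual implementation, and it omits velocity handling, shadows and other polish.

```swift
import UIKit

// The "side-swiping menu" pattern: the content view slides right under a
// pan gesture to reveal a menu view sitting beneath it.
final class SideMenuContainerViewController: UIViewController {
    let menuView = UIView()      // revealed on the left
    let contentView = UIView()   // the main viewport, draggable
    private let menuWidth: CGFloat = 260

    override func viewDidLoad() {
        super.viewDidLoad()
        menuView.frame = CGRect(x: 0, y: 0, width: menuWidth, height: view.bounds.height)
        contentView.frame = view.bounds
        view.addSubview(menuView)
        view.addSubview(contentView)
        contentView.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(didPan)))
    }

    @objc private func didPan(_ pan: UIPanGestureRecognizer) {
        let dx = pan.translation(in: view).x
        switch pan.state {
        case .changed:
            // Track the finger, clamped between closed (0) and fully open.
            contentView.frame.origin.x = max(0, min(menuWidth, dx))
        case .ended, .cancelled:
            // Snap open or closed depending on how far the user dragged.
            let open = contentView.frame.origin.x > menuWidth / 2
            UIView.animate(withDuration: 0.25) {
                self.contentView.frame.origin.x = open ? self.menuWidth : 0
            }
        default:
            break
        }
    }
}
```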

In addition to the UI navigation, your app probably supports contextual interactions as well. Adding the same two or three buttons below every content item will certainly clutter the UI! While buttons might seem to be useful triggers, gestures have great potential to make interaction with content more intuitive and fun. Don’t hesitate to integrate simple gestures such as tapping, double-tapping and tapping-and-holding to trigger important interactions. Instagram supports a simple double-tap to perform one of its key features, liking and unliking a content item. I would not be surprised to see other apps integrate this shortcut in the near future.
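
A sketch of what such contextual gestures look like in code, with hypothetical toggleLike and showOptions handlers standing in for the real actions:

```swift
import UIKit

// Instagram-style contextual gestures on a content cell:
// double-tap to like, tap-and-hold for secondary actions.
final class PhotoCell: UICollectionViewCell {
    override init(frame: CGRect) {
        super.init(frame: frame)

        let doubleTap = UITapGestureRecognizer(target: self, action: #selector(toggleLike))
        doubleTap.numberOfTapsRequired = 2
        contentView.addGestureRecognizer(doubleTap)

        let hold = UILongPressGestureRecognizer(target: self, action: #selector(showOptions))
        contentView.addGestureRecognizer(hold)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func toggleLike() {
        // Hypothetical: flip the like state and animate a heart overlay.
    }

    @objc private func showOptions(_ gesture: UILongPressGestureRecognizer) {
        guard gesture.state == .began else { return }
        // Hypothetical: present share / save / report actions.
    }
}
```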

An Interface That Fits

When designing an innovative mobile product, predicting user behavior can be very difficult. When we worked with Belgium’s Public Radio, my team really struggled with the UI balance between music visualization and real-time news. The sheer number of contextual scenarios and preferences made it very hard to come up with the perfect UI. So, we decided to integrate a simple dragging gesture to enable users to adjust the balance themselves.

Radio+
By dragging, users can balance music-related content and live news.

This gesture adds a creative contextual dimension to the application. The dragging gesture does not take the user from one section (news or music) to another. Rather, it enables the user to focus on the type of content they are most interested in, without missing out on the other.
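
A simplified sketch of the idea (illustrative only, not the actual Radio+ code): a pan gesture turns vertical drag distance into a 0-to-1 balance value that drives the layout.

```swift
import UIKit

// A vertical drag adjusts how much of the screen is given to the music
// visuals versus the live news feed. `balance` is a hypothetical model value.
final class BalanceViewController: UIViewController {
    let musicView = UIView()
    let newsView = UIView()
    private var balance: CGFloat = 0.5   // 0 = all news, 1 = all music

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(musicView)
        view.addSubview(newsView)
        view.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(didPan)))
        layoutForBalance()
    }

    @objc private func didPan(_ pan: UIPanGestureRecognizer) {
        // Convert the incremental drag distance into a balance change.
        let dy = pan.translation(in: view).y
        pan.setTranslation(.zero, in: view)
        balance = max(0, min(1, balance - dy / view.bounds.height))
        layoutForBalance()
    }

    private func layoutForBalance() {
        let split = balance * view.bounds.height
        musicView.frame = CGRect(x: 0, y: 0, width: view.bounds.width, height: split)
        newsView.frame = CGRect(x: 0, y: split, width: view.bounds.width,
                                height: view.bounds.height - split)
    }
}
```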

Think in Terms of Time, Dimension and Animation

What action is triggered when the user taps an item? And how do you visualize that it has actually happened? How fast does a particular UI element animate into the viewport? Does it automatically go off-screen after five seconds of no interaction?

The rise of touch and gesture-driven devices dramatically changes the way we design interaction. Instead of thinking in terms of screens and pages, we are thinking more in terms of time, dimension and animation. You’ve probably noticed that fine-tuning user interactions and demonstrating them to colleagues and clients with static wireframe screenshots is not easy. You don’t fully see, understand and feel what will happen when you touch, hold, drag and swipe items.
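
As a small example of designing in time rather than in static screens, here is a sketch that fades a control bar out after five seconds without interaction and brings it back on tap; the class and view names are hypothetical.

```swift
import UIKit

// Auto-hiding controls: fade out after five seconds of inactivity,
// reappear on tap, and restart the countdown on every interaction.
final class PlayerViewController: UIViewController {
    let controlsView = UIView()
    private var hideTimer: Timer?

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(controlsView)
        view.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(showControls)))
        scheduleHide()
    }

    @objc private func showControls() {
        UIView.animate(withDuration: 0.2) { self.controlsView.alpha = 1 }
        scheduleHide()
    }

    private func scheduleHide() {
        hideTimer?.invalidate()
        hideTimer = Timer.scheduledTimer(withTimeInterval: 5, repeats: false) { [weak self] _ in
            UIView.animate(withDuration: 0.4) { self?.controlsView.alpha = 0 }
        }
    }
}
```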

Certain prototyping tools, including Pop and Invision, can help bring wireframes to life. They are very useful for testing an application’s flow and for pinpointing where and when a user might get stuck. Your application has a lot more going on than simple back-and-forth navigation, so you need to detect interface bugs and potential sources of confusion as soon as possible. You wouldn’t want your development team to point them out to you now, would you?

InvisionApp
Invision enables you to import and link your digital wireframes.

To be more innovative and experimental, get together with your client first and explain that a traditional wireframe is not the UX deliverable that they need. Show the value of interactive wireframes and encourage them to include this in the process. It might increase the timeline and budget, but if they are expecting you to go the extra mile, it shouldn’t be a problem.

I even offer to produce a conceptual interface video for my clients, because once they’ve worked with the interactive wireframes and sorted out the details, they will often need something sexier to present to their internal stakeholders.

The Learning Curve

When designing gesture-based interactions, be aware that every time you remove UI clutter, the application’s learning curve goes up. Without visual cues, users could get confused about how to interact with the application. A bit of exploration is no problem, but users should know where to begin. Many apps show a UI walkthrough when first launched, and I agree with Max Rudberg that walkthroughs should explain only the most important interactions. Don’t explain everything at once. If it’s too explicit and long, users will skip it.

Why not challenge yourself and gradually introduce creative UI hints as the user uses the application? This pattern is often referred to as progressive disclosure and is a great way to show only the information that is relevant to the user’s current activity. YouTube’s Capture application, for instance, tells the user to rotate the device to landscape orientation just as the user is about to open the camera for the first time.

Visual Hints
Fight the learning curve with a UI walkthrough and/or visual hints.
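
Here is a sketch of that progressive-disclosure pattern, with a hypothetical view controller and UserDefaults key: the hint appears only the first time the user opens the camera, instead of in an up-front walkthrough.

```swift
import UIKit

// Progressive disclosure: show a one-time hint at the moment it becomes
// relevant. The UserDefaults key and openCamera() flow are hypothetical.
final class CaptureViewController: UIViewController {
    private let hintShownKey = "didShowRotateHint"

    func openCamera() {
        if !UserDefaults.standard.bool(forKey: hintShownKey) {
            showHint("Rotate your device to record in landscape")
            UserDefaults.standard.set(true, forKey: hintShownKey)
        }
        // ... start the capture session ...
    }

    private func showHint(_ text: String) {
        let label = UILabel()
        label.text = text
        label.textAlignment = .center
        label.frame = view.bounds.insetBy(dx: 20, dy: view.bounds.height / 2 - 40)
        view.addSubview(label)
        // Fade the hint away after a few seconds.
        UIView.animate(withDuration: 0.5, delay: 3, options: [],
                       animations: { label.alpha = 0 },
                       completion: { _ in label.removeFromSuperview() })
    }
}
```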

Adding visual cues to the UI is not the only option. In the Sparrow app, the search bar appears for a few seconds before animating upward and going off-screen, a subtle way to signal that it’s waiting to be pulled down.
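
A sketch of that kind of self-dismissing hint (again an illustrative reconstruction, not Sparrow’s actual code): show the bar briefly, then animate it up out of the viewport.

```swift
import UIKit

// Briefly show the search bar, then slide it off-screen to suggest
// that it can be pulled back down.
final class InboxViewController: UIViewController {
    let searchBar = UISearchBar()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        searchBar.frame = CGRect(x: 0, y: 0, width: view.bounds.width, height: 44)
        view.addSubview(searchBar)
        // Let the bar sit for a moment, then slide it out of the viewport.
        UIView.animate(withDuration: 0.3, delay: 2, options: [.curveEaseIn],
                       animations: { self.searchBar.frame.origin.y = -44 },
                       completion: nil)
    }
}
```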

Stop Talking, Start Making

The iPhone ushered in a revolution in interactive communication. Only five years later, touchscreen devices are all around us, and interaction designers are redefining the ways people use digital content.

We need to explore and understand the potential of touch and gesture-based interfaces and start thinking more in terms of time, dimension and animation. As demonstrated by several innovative applications, gestures are a great way to make an app more content-focused, original and fun. And many gesture-based interactions that seem too experimental at first come to be seen as very intuitive.

For a complete overview of the opportunities for gestures on all major mobile platforms, check out Luke Wroblewski’s “Touch Gesture Reference Guide.” I hope you’re inspired to explore gesture-based interaction and intensify your adventures in mobile interfaces. Don’t be afraid to go the extra mile. With interactive wireframes, you can iterate your way to the best possible experience. So, let’s stop talking and start making.


© Thomas Joos for Smashing Magazine, 2013.

Original author: Todd Hoff

Distributed transactions are costly because they use agreement protocols. Calvin says, surprisingly, that using a deterministic database allows you to avoid agreement protocols. The approach is to use a deterministic transaction layer that does all the hard work before locks are acquired and transaction execution begins.

Overview:
Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today’s systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels—including Paxos-based strong consistency across geographically distant replicas—at no cost to transactional throughput.

If you are interested, Daniel Abadi gives a very accessible overview of Calvin in “If all these new DBMS technologies are so scalable, why are Oracle and DB2 still on top of TPC-C? A roadmap to end their dominance.”
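
To make the core idea concrete, here is a toy sketch, purely illustrative and not from the paper: a sequencer fixes a single global order over transaction inputs, and every replica applies that order with deterministic logic, so replicas converge without running an agreement protocol per transaction.

```swift
// Toy illustration of Calvin's core idea (not the actual system):
// replicate transaction *inputs* in one agreed-upon order, then have every
// replica execute them with deterministic logic. Because execution is
// deterministic, replicas converge without per-transaction agreement.
struct Transaction {
    let id: Int
    let apply: (inout [String: Int]) -> Void   // deterministic state change
}

final class Replica {
    private(set) var state: [String: Int] = [:]
    private var nextExpected = 1

    // Transactions must be applied strictly in the global sequence order.
    func apply(_ txn: Transaction) {
        precondition(txn.id == nextExpected, "out-of-order delivery")
        txn.apply(&state)
        nextExpected += 1
    }
}

// The "sequencer" layer: in Calvin, this ordering is what gets replicated
// (e.g. via Paxos); here we simply number transactions as they arrive.
var sequence = 0
func sequenced(_ apply: @escaping (inout [String: Int]) -> Void) -> Transaction {
    sequence += 1
    return Transaction(id: sequence, apply: apply)
}

let log = [
    sequenced { $0["alice", default: 100] -= 10; $0["bob", default: 0] += 10 },
    sequenced { $0["bob", default: 0] -= 5; $0["carol", default: 0] += 5 },
]

// Any replica that applies the same log reaches the same state.
let r1 = Replica(), r2 = Replica()
for txn in log { r1.apply(txn); r2.apply(txn) }
assert(r1.state == r2.state)
```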

Original author: Jim Rossignol


As Eve trundles towards its tenth anniversary, and I baulk with disbelief that it has really been a decade since I quit PC Gamer and spent the summer playing Eve and Planetside 1, CCP have started rolling out celebratory things, including a fantastic space timeline that illustrates the rich backstory of the game’s universe. I was never particularly invested in Eve’s fiction, but it’s impossible to deny the work that CCP put into it, with an encyclopaedia of short stories and even a few novels.

Ten years! I put in five. You can read about them here. I wish I could go back. I miss you, Statecorp.


Update: In my eagerness to announce these workshops I made a scheduling error, incorrectly thinking the dates would be March 15+16 rather than 16+17. As a result I need to move one of the workshops to the weekend before, and since the Intro workshop should happen before the Advanced the new dates will be:

  • Saturday March 9: Introduction to Processing and Generative Art
  • Saturday March 16: Generative Art, Advanced Topics

Sorry for the confusion! On the plus side the Intro workshop might now be a smaller group which should make it nice and intimate.

I haven’t done any workshops in New York since November, so I have decided to offer my Intro and Advanced Generative Art workshops on consecutive weekends: Saturday March 9 and Saturday March 16.

The venue will be my apartment in comfortable Park Slope, Brooklyn. As usual I have 8 spots available for each workshop, they do tend to reach capacity so get in touch sooner rather than later. Reservation is by email and your spot is confirmed once I receive payment via PayPal.

The workshops will be taught using the most recent Processing 2.0 beta version (2.0b8 as of this moment), and as usual I will be using my own Modelbuilder library as a toolkit for solving the tasks we’ll look at. Familiarizing yourself with Processing 2.0 and Modelbuilder would be good preparation.

Make sure to download Modelbuilder-0019 and Control-P5 2.0.4, then run through the provided examples. Check OpenProcessing.org for more Modelbuilder examples.

Note about dataviz: I know there is a lot of interest in data visualization and I do get asked about that frequently in workshops. I can’t promise to cover data in detail since it’s a pretty big topic.

If you’re specifically looking for data techniques I would recommend looking at the excellent workshops series taught by my friend Jer Thorp. He currently offers two such workshops, titled “Processing and Data Visualization” and “Archive, Text, & Character(s)”.
