
underlying technology

Original author: Florence Ion


It's hard to believe that just a few decades ago, touchscreen technology could only be found in science fiction books and film. These days, it's almost unfathomable how we once got through our daily tasks without a trusty tablet or smartphone nearby, but it doesn't stop there. Touchscreens really are everywhere. Homes, cars, restaurants, stores, planes, wherever—they fill our lives in spaces public and private.

It took generations and several major technological advancements for touchscreens to achieve this kind of presence. Although the underlying technology behind touchscreens can be traced back to the 1940s, plenty of evidence suggests that touchscreens weren't feasible until at least 1965. Popular science fiction television shows like Star Trek didn't even refer to the technology until Star Trek: The Next Generation debuted in 1987, almost two decades after touchscreen technology was deemed possible. But their inclusion in the series paralleled the advancements in the technology world, and by the late 1980s, touchscreens finally appeared realistic enough that consumers could actually bring the technology into their own homes.

This article is the first of a three-part series on touchscreen technology's journey from fiction to fact. The first three decades of touch are important to reflect upon in order to really appreciate the multitouch technology we're so used to having today. Today, we'll look at when these technologies first arose and who introduced them, plus we'll discuss several other pioneers who played a big role in advancing touch. Future entries in this series will study how the changes in touch displays led to essential devices for our lives today and where the technology might take us in the future. But first, let's put finger to screen and travel to the 1960s.


Original author: Adrianne Jeffries


Businessweek has a thoughtful and well-written defense of Bitcoin, the open source virtual currency that approximates cash on the internet. Writer and programmer Paul Ford points out that the Bitcoin experiment, so far, is working: it's actually being exchanged as money, and the underlying technology has proved "to be impeccable and completely functional." He makes a persuasive point that Bitcoin is "no more arbitrary than derivatives or credit default swaps." At the time of this writing, a single Bitcoin is being traded for $91.67 USD (although The Economist, at least, expects the bubble will pop soon).

Unfortunately, Ford also repeats the unsubstantiated claim that Bitcoin's meteoric price rise was sparked by fears over the Cyprus...




Say you want to quickly transfer a file, like a photo or a contact entry, from your smartphone to a friend’s. Most people would email or text the file. But a number of technologies have come along to make the process quicker and simpler.

On some Android phones, you can “beam” files like photos from phone to phone by tapping one phone to another, or bringing them very close. But that requires that both phones have a special chip for NFC, or near-field communication, which isn’t yet universal on Android phones and doesn’t exist at all in iPhones.

Another approach is to use an app called Bump, which transfers files between iPhones and Android phones when those holding them do a sort of sideways fist bump. It works pretty well, but you have to make contact with the other person.


With the Xsync iPhone app, you select an audio file, photo, video, contact or calendar appointment by tapping on the simple icon that represents each one.

This week, I’ve been testing a different approach — an iPhone app called Xsync. It doesn’t require any special chip and instead uses a free app and a hardware feature almost every smartphone possesses — the camera. While it is primarily meant, like Bump, for transfers between phones in proximity, it also works over long distances. I was able to almost instantly send and get photos, videos and songs using Xsync between two iPhones held up to computer webcams during a Skype video call.

The key to Xsync is the QR code, that square symbol found seemingly everywhere these days—online, in print newspapers and magazines, on posters and other places. These codes typically just contain text—often, a Web address. But Xsync, a tiny company based in Seattle, generates QR codes that initiate the transfer of whole files, or in the case of photos, even groups of files. It has a built-in QR code scanner to read these codes using the phone’s camera.

The biggest drawback to Xsync is that it is currently only available for the iPhone. An Android version is planned for sometime this quarter. Meanwhile, you can use an Android phone with any QR code reader to receive, though not send, files sent via Xsync.

The Xsync app is something of a teaser for the underlying technology, which the company calls the Optical Message Service. The company’s goal isn’t to build its own apps, but to license the technology to cellphone makers so it becomes a built-in way to transfer files.

Here’s how it works. Once you install Xsync on your iPhone, you select an audio file, photo, video, contact or calendar appointment, each of which is represented by a simple icon. The app creates a QR code representing the intended transfer of that file and temporarily sends the file to Xsync’s server. Your friend then uses Xsync on his or her iPhone to scan the QR code you’ve created with the phone’s camera, and the files are sent to your friend’s iPhone.
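
To make the flow concrete, here is a rough sketch of the handoff in JavaScript. Xsync hasn’t published how the Optical Message Service works internally, so the uploadToServer, renderQRCode and downloadFile helpers below are hypothetical stand-ins, and the expiring-URL scheme is an assumption; the point is only that the QR code carries a pointer to the file, not the file itself.

// Hypothetical sketch of a QR-based file handoff (not Xsync's actual API).
// Sender: upload the file to a relay server, get back a short-lived URL,
// then show that URL as a QR code for the other phone to scan.
function shareFile(file) {
   uploadToServer(file, function (temporaryUrl) {   // made-up relay upload
      renderQRCode(temporaryUrl);                   // any QR generator works;
   });                                              // the code is just text
}

// Receiver: scan the code with the camera, pull the URL out of it and
// download the file before the link expires.
function receiveFile(scannedUrl, done) {
   downloadFile(scannedUrl, done);                  // another hypothetical helper
}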

In my tests, it was easy, quick and reliable. I successfully used Xsync to send and receive all the included types of files with an iPhone 5, an iPhone 4S and an iPad mini. I was also able to receive files on an Android phone, a Google Nexus 4, via a QR code generated by Xsync.


The app generates QR codes that initiate the transfer of whole files, or in the case of photos, even groups of files.

You can even generate a QR code using Xsync that will allow you to transfer money from your PayPal account to another person’s, though that requires an added authentication step for security. But it worked, and would be a good way to, say, split a bill at a restaurant. (This PayPal feature of Xsync doesn’t work with Android, for now.)

The company says the file transfers are secure, for two reasons. First, they are encrypted. More important, each code is generated for a specific transfer and expires after a relatively short time. For instance, codes for photos expire after 24 hours, according to the company.

You can use Xsync to transmit certain kinds of files — including documents — you’ve stored in your Dropbox account, though, oddly, the Xsync app hides this document-transfer feature under an icon for sharing calendar appointments.

And you don’t have to be close to make the transfer. In addition to my Skype example, you can send a QR code generated by Xsync via email or text message, or even post the code to Facebook. Another person can then scan the code to get the file.

Xsync can generate codes that represent either existing files on your phone, or files you create on the spot. If you don’t want to use an existing one, the audio, photo, video and calendar icons in the app invite you to create a new file to be transferred.

On the iPhone, the receiving device displays the transferred files right within the Xsync app. If you’re using an Android phone to receive, you get a Web address that leads you to the file on Xsync’s server.

If you have an iPhone, Xsync is an effective way to transfer files like photos, songs, videos and more between phones.

Email Walt at mossberg@wsj.com.


Edutopia.org - Ask kids what Facebook is for, and they'll tell you it's there to help them make friends. And, on the surface anyway, that's what it looks like. Of course, anyone who has poked a bit deeper or thought a bit longer about it understands that people programming Facebook aren't sitting around wondering how to foster more enduring relationships for little Johnny, Janey and their friends, but rather how to monetize their social graphs -- the trail of data the site is busy accumulating about Johnny and Janey every second of the day and night.

After all, our kids aren't Facebook's customers; they're the product. The real customers are the advertisers and market researchers paying for their attention and user data. But it's difficult for them or us to see any of this and respond appropriately if we don’t know anything about the digital environment in which all this is taking place. That’s why -- as an educator, media theorist and parent -- I have become dedicated to getting kids code literate.

Digital World Ownership

As I see it, code literacy is a requirement for participation in a digital world. When we acquired language, we didn't just learn how to listen, but also how to speak. When we acquired text, we didn't just learn how to read, but also how to write. Now that we have computers, we are learning to use them but not how to program them. When we are not code literate, we must accept the devices and software we use with whatever limitations and agendas their creators have built into them. How many times have you altered the content of a lesson or a presentation because you couldn't figure out how to make the technology work the way you wanted? And have you ever considered that the software's limitations may be less a function of the underlying technology than that of the corporation that developed it? Would you even know where to begin distinguishing between the two?

This puts us and our kids -- who will be living in a more digital world than our own -- at a terrible disadvantage. They are spending an increasing amount of their time in digital environments where the rules have been written by others. Just being familiar with how code works would help them navigate this terrain, understand its limitations and determine whether those limits are there because the technology demands it -- or simply because some company wants it that way. Code literate kids stop accepting the applications and websites they use at face value, and begin to engage critically and purposefully with them instead.

Otherwise, they may as well be at the circus or a magic show.

More generally, knowing something about programming makes us competitive as individuals, companies and a nation. The rest of the world is learning code. Their schools teach it, their companies are filled with employees who get it, and their militaries are staffed by programmers -- not just gamers with joysticks. According to the generals I've spoken with, we are less than a generation away from losing our technological superiority on the cyber battlefield, which should concern a nation that depends so heavily on drones for its security and on electronic trading as an industry.

Finally, learning code -- and doing so in a social context -- familiarizes people with the values of a digital society: the commons, collaboration and sharing. These are replacing the industrial age values of secrecy or the hoarding of knowledge. Learning how software is developed and how the ecosystem of computer technology really works helps us understand the new models through which we'll be working and living as a society. It's a new kind of teamwork, and one that's under-emphasized in our testing-based school systems.

Codecademy

To build my own code literacy, I decided to take free classes through Codecademy.com, and I ended up liking it so much that I'm now working with them to provide free courses for kids to learn to code. The lessons I've learned along the way are of value to parents and teachers looking to grow more code-literate young people.

1. Learning by Doing

One of Codecademy's key insights was that programming is best taught by doing. Where literature might best be taught through books, coding is best taught in an interactive environment. So instead of just giving students text to read or videos to watch, Codecademy invites them to learn to code by actually making code. Every online lesson involves writing lines of code in an interactive window within the web browser, and then hitting the "run" button and watching those lines actually work. Instant payoff, and an "intrinsic reward."

2. A Stake in the Outcome

Code also makes much more sense to people when it is tied to a real project. People need reasons for learning one skill or another. When students are working to devise a computer adventure game, all of a sudden abstract mathematical functions become immediately relevant.

3. Benefits of Interaction

Finally, while badges and point scores are great for motivating students in the short run, social connections to a real group of cohorts probably matter more for the long haul. Codecademy's first strides in that direction, simple forums, allow users to seek out help from others when they're stuck in a lesson. Meanwhile, those who are mastering a skill find it really sinks in when they have the opportunity to explain things to someone encountering it for the first time. Just as research has shown a heterogeneous classroom benefits those on both ends of the aptitude spectrum, interaction between more and less experienced code learners benefits both.

After-School Adventures

The greatest challenge so far, at least from my end, has been figuring out ways to get these interactive lessons into the schools that need them. Between curriculum standards, overworked faculty and legal restrictions on inviting minors to use websites, it's an uphill battle. To help with these challenges, Codecademy has unveiled an after-school program through which any parent or teacher can teach code to a self-selecting group of interested students.

Codecademy.com/afterschool is basically "Codecademy in a box." It's a year of interactive lesson tracks, specially assembled for an after-school group or club run by an adult with no programming experience. In the fall semester, kids make a website by learning HTML and CSS. In the spring, they build an adventure game by learning JavaScript. The beauty of the model is that the adult supervising all this needn't know anything about code in advance. The course materials let you know everything you need to stay a week ahead of the kids, and the rest of the online community is there to help you out if you get stuck.

When I learned about the after-school program, I was compelled to tweet, "No Excuses." That's about the best I can say it. The obstacles to code literacy are getting smaller every day, while the liabilities for ignorance are only getting more profound.

What steps are you taking to bring code literacy into your classroom?


The corporate social web still sucks

Expert Labs, the non-profit organization behind ThinkUp, a web-based data-liberation and analytics application, is rebooting into a commercial entity.

No need to panic if you use ThinkUp to back up your social network life; the application will remain open source and freely available.

But Expert Labs is going away and ThinkUp is refocusing on a larger goal — liberating your online social life from the clutches of corporate web entities.

In its own words the new ThinkUp wants to build “an information network that connects to today’s social networks, but isn’t centralized and dependent on a company or investors.”

That’s not an entirely new idea. Diaspora and some other projects are trying to do the same thing, but ThinkUp is taking a different approach — it wants to build an app first and focus on the user experience rather than the underlying technology.

In fact, ThinkUp is already pretty close to that goal: it’s a web-based app that pulls your data out of social silos like Facebook or Twitter and stores it on your own server. You control your own data, and have a record of your conversations potentially long after Facebook, Twitter and the rest have become mere footnotes in the history of the web.

For more on how ThinkUp works and how you can use it, be sure to check out our earlier coverage and then grab the code and try it for yourself.

So what of ThinkUp’s new, loftier goals? Is any attempt to replace Facebook doomed to failure? Of course not. Everything is replaceable, just ask MySpace. And ThinkUp believes its approach is different. “Prior attempts have tried to solve this problem based on the assumption that it is a technical challenge,” says ThinkUp’s Knight News Challenge application. “We believe it to be a social one.” ThinkUp’s focus going forward will be on the social and the interface:

We will draw people in through a compelling media site that encourages participation via our decentralized platform… a peer-to-peer network that powers a great media property with broad appeal — imagine if Digg or Reddit were open, decentralized and powered by a network instead of votes.

If you’re curious to know what that might look like, head on over to the ThinkUp proposal for the Knight News Challenge and click the heart icon to “like” it (incidentally, if the Knight News Challenge sounds familiar, that might be because it’s also the birthplace of EveryBlock). In the meantime, work on the ThinkUp app continues with a new release that improves the charts and graphs and paves the way for the coming Foursquare support. Check out the ThinkUp GitHub page for more details.


Before drawing anything in a browser, ask yourself three questions:

  1. Do you need to support older browsers? If the answer is yes, then your only choice is Raphaël. It handles browsers all the way back to IE 7 and Firefox 3. Raphaël even has some support for IE 6, although some of its underlying technology cannot be implemented there.
  2. Do you need to support Android? Android doesn’t support SVG, so you’ll have to use Paper.js or Processing.js. Some rumors say that Android 4 will handle SVG, but the majority of Android devices won’t support it for years.
  3. Is your drawing interactive? Raphaël and Paper.js focus on interaction with drawn elements through clicking, dragging and touch. Processing.js doesn’t support any object-level events, so responding to user gestures is very difficult. Processing.js can draw a cool animation on your home page, but the other tools are better for interactive applications.

Paper.js, Processing.js and Raphaël are the leading libraries for drawing on the Web right now. A couple of others are up and coming, and you can always use Flash, but these three work well with HTML5 and have the widest support among browser vendors.

Choosing the right framework will determine the success of your project. This article covers the advantages and disadvantages of each, and the information you need to make the best choice.

All of the code in this article is open source and can be run on the demo page that accompanies this article.

Overview

                 Paper.js            Processing.js       Raphaël
  Technology     canvas tag          canvas tag          SVG
  Language       PaperScript         Processing script   JavaScript
  Browsers       IE 9                IE 9                IE 7
  Mobile         Yes                 Yes                 iOS only
  Model          Vector and raster   Raster              Vector
  Size           56 KB               64 KB               20 KB


It’s all JavaScript once the page runs, but the frameworks take different paths to get there. Raphaël is written directly in JavaScript, but Paper.js uses PaperScript, and Processing.js uses its own script. They all support Firefox, Chrome and Safari, but Internet Explorer is an issue — Paper.js and Processing.js use the canvas tag and thus require IE 9.

PaperScript is a JavaScript extension that makes it possible to write scripts that don’t pollute the global namespace. This cuts down on JavaScript conflicts. PaperScript also supports direct math on objects such as Point and Size: you can add two points together as if they were numbers.
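
For instance, here is a minimal PaperScript sketch of that operator support, using the same Point and Size constructors that appear in the drawing examples later in this article:

// In PaperScript, Point and Size support ordinary arithmetic operators.
var topLeft = new Point(10, 20);
var offset  = new Point(5, 5);

var corner  = topLeft + offset;      // Point(15, 25) -- no .add() call needed
var doubled = corner * 2;            // Point(30, 50)
var quarter = new Size(80, 80) / 4;  // Size(20, 20)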

Processing.js is based on a framework named Processing, which runs in the Java Virtual Machine. You define int and float instead of var, and you can use classes with Java-style inheritance. While the Processing.js script looks a little like Java, it’s more like JavaScript and doesn’t require many of the more complex features of Java.

Using all three libraries is easy if you have some familiarity with JavaScript.

Getting Started

Start by importing each library. The process for setting each up is a little different.

Setting Up Paper.js

<head>
<script src="paper.js" type="text/javascript" charset="utf-8"></script>
<script type="text/paperscript" canvas="paperCircle" src="paper_circle.pjs" id="script"></script>
</head>
<body>
<canvas id="paperCircle" class="canvas" width="200" height="200" style="background-color: white;"></canvas>

Paper.js specifies a script type of text/paperscript and the ID of the canvas tag that you’ll draw on. It uses that ID to know where to draw.

Setting Up Processing.js

<head>
<script src="processing.js" type="text/javascript" charset="utf-8"></script>
</head>
<body>
<canvas width="200" height="200" class="canvas" data-processing-sources="processing_circle.java"></canvas>

Processing.js uses the data-processing-sources attribute of the canvas tag to import your drawing. I use a .java extension for Processing’s source file so that my editor color-codes it properly. Some authors use a .pde or .pjs extension. It’s up to you.

Setting Up Raphaël

<head>
<script src="raphael-min.js" type="text/javascript" charset="utf-8"></script>
<script src="raphael_circle.js" type="text/javascript" charset="utf-8"></script>
</head>

Raphaël is imported like any other JavaScript file. It works well with jQuery’s ready function or any other JavaScript framework.

Now we can start drawing.

Object-Oriented Drawing

Both Paper.js and Raphaël use object-oriented drawing: you draw a circle and get back a circle object. Processing.js draws the circle and doesn’t give you anything back. The following simple example makes it clear. Let’s start with a circle in the middle of the screen at point 100,100.

Paper.js:

var circle = new Path.Circle(new Point(100, 100), 10);
circle.fillColor = '#ee2a33';

Raphaël:

var paper = Raphael('raphaelCircle', 200, 200);
var c = paper.ellipse(100, 100, 10, 10);
c.attr({'fill': '#00aeef', 'stroke': '#00aeef'});

Processing.js:

void setup() {
   size(200, 200);
}

void draw() {
   background(#ffffff);
   translate(100, 100);
   fill(#52b755);
   noStroke();
   ellipse(0, 0, 20, 20);
}

Each code snippet draws the same circle. The difference is in what you can do with it.

Paper.js creates the circle as a path object. We can hold onto the object and change it later. In Paper.js, circle.fillColor = 'red'; fills our circle with red, and circle.scale(2) makes it twice as big.

Raphaël follows Paper.js’ object-oriented model. In Raphaël, we can change the color of our circle with c.attr('fill', 'red');, and scale it up with c.scale(2, 2);. The point is that the circle is an object that we can work with later.

Processing.js doesn’t use objects; the ellipse function doesn’t return anything. Once we’ve drawn our circle in Processing.js, it’s part of the rendered image, like ink on a page; it’s not a separate object that can be changed by modifying a property. To change the color, we have to draw a new circle directly on top of the old one.

When we call fill, it changes the fill color for every object we draw thereafter. After we call translate and fill, every shape will be filled with green.

Because functions change everything, we can easily end up with unwanted side effects. Call a harmless function, and suddenly everything is green! Processing.js provides the pushMatrix and popMatrix functions to isolate changes, but you have to remember to call them.
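
A minimal sketch of that isolation pattern, in the same Processing-style syntax used elsewhere in this article (note that pushMatrix and popMatrix save and restore only the coordinate system; style changes such as fill are a separate concern and are reset explicitly here):

float angle = 0.0;

void setup() {
   size(200, 200);
   frameRate(30);
}

void draw() {
   background(#ffffff);

   pushMatrix();              // save the current coordinate system
   translate(100, 100);
   rotate(angle);
   fill(#52b755);
   noStroke();
   rect(-40, -40, 80, 80);    // drawn in the translated, rotated coordinates
   popMatrix();               // restore: the transforms above no longer apply

   fill(#ee2a33);             // set the fill again; popMatrix() doesn't restore it
   ellipse(20, 20, 10, 10);   // drawn near the canvas origin, unaffected

   angle += 0.1;
}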

Processing.js’ no-objects philosophy means that complex drawings run faster. Paper.js and Raphaël contain references to everything you draw, and so the memory overhead created by complex animations will slow down your application. Processing.js contains no references to drawn elements, so each shape takes up a tiny amount of memory. Memory overhead pays off if you need to access an object later, but it’s overkill if you don’t. Paper.js gives you a way out of this with the Symbol object and by rasterizing objects, but you have to plan ahead to keep the app running fast.
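
As a rough illustration of that escape hatch in PaperScript: a Symbol lets many placed copies share a single underlying path, and rasterize() flattens finished, static artwork into one image. A small sketch of the idea (the counts and colors are arbitrary):

// One path definition shared by many lightweight placed instances.
var dot = new Path.Circle(new Point(0, 0), 5);
dot.fillColor = '#52b755';
var symbol = new Symbol(dot);

for (var i = 0; i < 500; i++) {
   // Each placement references the same path instead of copying it.
   symbol.place(new Point(Math.random() * 200, Math.random() * 200));
}

// Once the artwork is static, flatten the whole layer into a single raster.
var snapshot = project.activeLayer.rasterize();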

The object-oriented versus no-objects philosophy has implications for everything you do with these libraries. It shapes the way each library handles animations.

Let’s Make It Move

Rotating circles aren’t very interesting, so we’ll make a square rotate around a circle.

Animation in Processing.js

Processing.js supports animation with the predefined setup and draw functions, like this:

float angle = 0.0;
void setup() {
   size(200, 200);
   frameRate(30);
}

void draw() {
   background(#ffffff);
   translate(100, 100);
   fill(#52b755);
   noStroke();
   ellipse(0, 0, 20, 20);

   rotate(angle);
   angle += 0.1;
   noFill();
   stroke(#52b755);
   strokeWeight(2);
   rect(-40, -40, 80, 80);
}

The setup function is called once when the application starts. We tell Processing.js to animate with a frame rate of 30 frames per second, so our draw function will be called 30 times every second. That rate might sound high, but it’s normal for making an animation look smooth.

The draw function starts by filling in the background of the canvas; it paints over anything left over from previous invocations of the draw function. This is a major difference with Processing.js: we are not manipulating objects, so we always have to clean up previously drawn shapes.

Next, we translate the coordinate system to the 100,100 point. This positions the drawing at 100 pixels from the left and 100 pixels from the top of the canvas for every drawing until we reset the coordinates. Then, we rotate by the specified angle. The angle increases with every draw, which makes the square spin around. The last step is to draw a square using the fill and rect functions.

The rotate function in Processing.js normally takes radians instead of degrees. That’s why we increase the angle by 0.1 each frame, rather than a higher number such as 3. This is one of many times when trigonometry shows up in this method of drawing.

Animation in Paper.js

Paper.js makes this simple animation easier than in Processing.js, with a persistent rectangle object:

var r;

function init() {
   var c = new Path.Circle(new Point(100, 100), 10);
   c.fillColor = '#ee2a33';

   var point = new Point(60, 60);
   var size = new Size(80, 80);
   var rectangle = new Rectangle(point, size);
   r = new Path.Rectangle(rectangle);
   r.strokeColor = '#ee2a33';
   r.strokeWidth = 2;
}

function onFrame(event) {
   r.rotate(3);
}

init();

We maintain the state of our square as an object, and Paper.js handles drawing it on the screen. We rotate it a little for each frame. Paper.js manages the path, so we don’t have to redraw everything for each frame or keep track of the angle of rotation or worry about affecting other objects.

Animation in Raphaël

Animations in Raphaël are written in standard JavaScript, so Raphaël doesn’t have specific functions for handling animation frames. Instead, we rely on JavaScript’s setInterval function.

var paper = Raphael('raphaelAnimation', 200, 200);
var c = paper.ellipse(100, 100, 10, 10);
c.attr({
   'fill': '#00aeef',
   'stroke': '#00aeef'
});

var r = paper.rect(60, 60, 80, 80);
r.attr({
   'stroke-width': 2,
   'stroke': '#00aeef'
});

setInterval(function() {
   r.rotate(6);
}, 33);

Raphaël is similar to Paper.js in its object-oriented approach. We have a square, and we call a rotate function on it. Thus, we can easily spin the square with a small amount of code.

Interaction

Raphaël shines when you need to enable interactivity in a drawing. It provides an event model similar to JavaScript’s, making it easy to detect clicks, drags and touches. Let’s make our square clickable.

Interactions With Raphaël

var paper = Raphael('raphaelInteraction', 200, 200);
var r = paper.rect(60, 60, 80, 80);
r.attr({'fill': '#00aeef', 'stroke': '#00aeef'});

var clicked = false;

r.click(function() {
   if (clicked) {
      r.attr({'fill': '#00aeef', 'stroke': '#00aeef'});
   } else {
      r.attr({'fill': '#f00ff0', 'stroke': '#f00ff0'});
   }
   clicked = !clicked;
});

The click function in Raphaël works like jQuery, and you can add it to any object. Once we get the click event, changing the color of the square is easy. Raphaël has more functions to support dragging, hovering and all of the other user interactions you expect from JavaScript.
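
Dragging and hovering follow the same element-level pattern. Here is a brief sketch, continuing with the square r from the snippet above; Raphaël's drag handler receives the offsets dx and dy from the point where the drag started:

var start = function () {
   // remember where the square was when the drag began
   this.ox = this.attr('x');
   this.oy = this.attr('y');
};
var move = function (dx, dy) {
   this.attr({x: this.ox + dx, y: this.oy + dy});
};
var up = function () {};

r.drag(move, start, up);

// Dim the square while the mouse hovers over it.
r.hover(function () { this.attr('opacity', 0.5); },
        function () { this.attr('opacity', 1); });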

Interactions With Paper.js

Paper.js has a different way of managing interactions, but it’s still pretty easy:

var hitOptions = {
   fill: true,
   tolerance: 5
};

function init() {
   var point = new Point(60, 60);
   var size = new Size(80, 80);
   var rectangle = new Rectangle(point, size);
   r = new Path.Rectangle(rectangle);
   r.fillColor = '#ee2a33';
}

function onMouseUp(event) {
   var hitResult = project.hitTest(event.point, hitOptions);

   if (hitResult && hitResult.item) {
      if (hitResult.item.clicked) {
         hitResult.item.fillColor = '#ee2a33';
      } else {
         hitResult.item.fillColor = '#f00ff0';
      }

      hitResult.item.clicked = !hitResult.item.clicked;
   }
}

init();

Paper.js deals with mouse gestures through a concept called “hit testing.” A hit finds the point under the mouse cursor and figures out which object it lies above. Hit options enable you to define how the hit works: you can set options for such things as how close the mouse has to be, and whether the middle of the object counts or only the edge. We can extend this hit test to any object or group of objects in Paper.js.

The Paper.js team added object-level events similar to Raphaël’s a few weeks ago. The events should show up in the next release.

Interactions With Processing.js

Processing.js makes detecting mouse clicks tricky. It doesn’t support object-level events or hit testing, so we’re pretty much on our own.

float bx;
float by;
int bs = 20;
boolean bover = false;
boolean clicked = false;

void setup() {
   size(200, 200);
   bx = width/2.0;
   by = height/2.0;
   noStroke();
   fill(#52b755);
   frameRate(10);
}

void draw() {
   background(#ffffff);

   // Test if the cursor is over the box
   if (mouseX > bx-bs && mouseX < bx+bs &&
       mouseY > by-bs && mouseY < by+bs) {
      bover = true;
   } else {
      bover = false;
   }

   translate(100, 100);
   rect(-40, -40, 80, 80);
}

void mousePressed() {
   if (bover) {
      if (clicked) {
         fill(#52b755);
      } else {
         fill(#f00ff0);
      }
      clicked = !clicked;
   }
}

Once Processing.js draws the square, it forgets about it. We want the color of the square to change when we click on it, but the script doesn’t know that, so we have to do all of the calculations ourselves. The draw function detects the mouse cursor’s position and does the math to determine whether it lies within the square.

The code is not too bad for the square, but our circle would need πr². And more complex shapes such as ovals, curves and compound shapes would require even more math.

No Clear Winner

Each framework has its advantages. Between them, the features make for cool demos and even cooler applications.

Showing Off Paper.js

Paper.js excels at manipulating complex shapes. It can turn, twist and transform any object in hundreds of ways. These transforms make it easy to convert objects based on interactive gestures. The new Google Music Tour, which makes colored lines beat in time to music, shows how one can make complex changes on simple shapes.

The other wow factor in Paper.js is its support of raster graphics. Paper.js can completely change the way images are drawn — including by turning them into spirals and Q*bert boards.

Showing Off Processing.js

Processing.js’ biggest feature is speed, making it possible to draw complex animations on slower machines. Many examples are out there, but the fluidity of Processing.js animations shows up best in Ricardo Sánchez’s koi pond.

The swishing of the tails and waving of the bodies make the koi look very natural. Processing.js makes this easy, with support for curves and customized animations.

Processing.js also supports complex drawing elements such as shading, lighting and 3-D transforms. If you want to create complex animations in canvas very quickly, then Processing.js is the clear winner.

Showing Off Raphaël

The best feature of Raphaël is its support for Internet Explorer 7 and 8. If your application has to run on older browsers, then Raphaël is the only option.

The other big feature of Raphaël is its community. Raphaël is older than Paper.js and Processing.js and thus has had more time to build examples, tutorials and user support. It has built-in support for easing, animation transforms and the event handlers that we saw in the interaction example; it also has a comprehensive charting library.

Raphaël also has the best tooling support.

The Tools

If you’ve worked with Flash, the lack of tools for these frameworks will disappoint you. Many of the frameworks will edit SVG images, but none of them offer a drag-and-drop method for creating applications.

A few simple tools are out there, but they are more like proofs of concept than actual products. Adobe is working on a tool named Edge, but it has a long way to go.

If you want to drag and drop, then Web animations aren’t for you yet. Right now, this method of drawing is more like video-game programming. Writing code to draw a circle is tougher than clicking and dragging, but it scales to more complex applications and some fun stuff.

Let’s Build Something Real

So far, we’ve looked at some simple examples, seen the best features of each platform and looked at how to choose the right one. Each framework has pluses and minuses, but judging them is difficult until you create an actual application.

To compare each framework, I’ve drawn some gears. Each gear is made up of two circles, with a set of teeth around the outer circle.

When the shapes are all given the same color, they look just like a gear.

Every gear will rotate a little with each frame of the animation. The first gear will be given a speed, and the rest will move relative to it. The gears will arrange, mesh and rotate together with a crazy amount of trigonometry. Put them together and you’ve got a complex gear system.

Paper.js, Processing.js and Raphaël each get their own version of the gears; the running demos and the code for all three are on the demo page that accompanies this article.

The Raphaël version doesn’t quite match, though. The rotate function works differently in Raphaël than it does in Paper.js and Processing.js. Raphaël doesn’t support rotation around a fixed point. Instead, the teeth of the gears are drawn and redrawn independently, and they fly through the air instead of rotating around the center. The only way to really turn the gear would be to draw the entire gear as a single path, and that takes more math than I’m willing to write. If anyone wants to give it a try, everything is open source.

The Future Of Web Drawing

We gamble on every new technology that we learn: we hope that it catches on and that our investment pays off. Technologies rise and fall on their respective merits, but other factors come into play, such as vendor support and business uses. The future of our industry is almost a guessing game.

Right now, Flash looks like a bad investment. Flash has great tools, years of development and a large community, but even Adobe is moving away from it.

SVG is in a similar situation. Browsers support it now, but it isn’t getting a lot of attention.

Every browser vendor is working hard to render canvas faster, to use hardware acceleration and to better support libraries such as Paper.js and Processing.js. All mobile devices support canvas, and their developers are working to improve it.


© Zack Grossbart for Smashing Magazine, 2012.


Trailrunner7 writes "In the last couple of years, Google and some other Web giants have moved to make many of their services accessible over SSL, and in many cases, made HTTPS connections the default. That's designed to make eavesdropping on those connections more difficult, but as researchers have shown, it certainly doesn't make traffic analysis of those connections impossible. Vincent Berg of IOActive has written a tool that can monitor SSL connections and make some highly educated guesses about the contents of the requests going to Google Maps, specifically looking at what size the PNG files returned by Google Maps are. The tool then attempts to group those images in a specific location, based on the grid and tile system that Google uses to construct its maps."
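
As a purely illustrative sketch of the reasoning (not Berg's actual tool, whose code isn't shown here), suppose you had already built an index mapping PNG byte sizes to the map tiles of that size; matching observed encrypted-response lengths against that index and keeping only candidates that cluster on the tile grid is roughly the idea:

// Illustrative only: match observed HTTPS response sizes against a
// hypothetical precomputed index of Google Maps tile sizes, then keep
// the candidates that sit next to another candidate on the tile grid.
function guessTiles(observedSizes, tileIndex) {
   // tileIndex: assumed map from PNG byte size -> list of {x, y, zoom} tiles
   var candidates = [];
   observedSizes.forEach(function (size) {
      (tileIndex[size] || []).forEach(function (tile) {
         candidates.push(tile);
      });
   });

   // Tiles fetched together usually cover one area, so prefer candidates
   // with at least one neighboring candidate at the same zoom level.
   return candidates.filter(function (a) {
      return candidates.some(function (b) {
         return a !== b && a.zoom === b.zoom &&
                Math.abs(a.x - b.x) <= 1 && Math.abs(a.y - b.y) <= 1;
      });
   });
}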




I just posted my thoughts on the Flash Player team blog, about the recent announcements we have made regarding Flash Player support on mobile browsers.

As a long time Flash developer who loves Flash, I can tell you that what is happening right now is a good thing.

First, we are making bold moves like stopping the development of the browser plug-in on mobile browsers in favor of investing further in Flash-based apps packaged with AIR. Playing existing content sounds like a great idea on paper, but we know it doesn't always work that way: you need to author for mobile and think for mobile. From talking to customers and looking at content today, we realize that very few people are targeting the plug-in on mobile browsers.

Flash developers have always created some of the most stunning, immersive, emotional experiences on the web. They've always pushed the cutting edge, with few restrictions. But mobile is different, and developers need to adapt to different constraints and affordances. Flash lets you do that, whether you are taking advantage of efficient hardware accelerated video playback or native support for features like multitouch and accelerometers. But it's costly to create beautiful experiences optimized for mobile browsers — a cost that doesn't make sense if people using one of the most popular mobile platforms can't see the content you create.

Existing content for desktops didn't always look as magical on phones as people were used to seeing with Flash Player on their desktops. Content optimized for desktops with big screens and beefy processors can’t look as good on a phone or a tablet it was never designed for. This really had an impact on the trust that people had in Flash, and this perception made it hard to start new projects optimized for mobile browsers. There was just no appetite to even try doing this.

In contrast, you guys create super nice Flash-based apps packaged with AIR and deliver them to app stores across iOS, Android, and BlackBerry devices – by the end of this year, you will be able to reach over 350 million tablets and smartphones. Have you seen an article from a journalist saying that Machinarium, Comb Over Charlie, or TweetHunt are horrible? No, people love those games. Your work fits the trend the entire industry is seeing: even as we're excited about improvements in mobile browsers, the most compelling, immersive experiences for mobile devices are delivered through apps, optimized from the ground up for mobile. We're helping you guys leverage your talent – the same skills in ActionScript and tooling – to reach that huge, growing market of smartphone and tablet users with amazing apps. Flash makes it possible for developers who craft beautiful desktop experiences to deliver great mobile app experiences. We are going to really focus on that, creating the best solution to build stunning interactive content, games, and video apps across all screens.

Flash Player on the desktop continues to show a path for the consistent, super duper experiences that are impossible to deliver to over a billion people with any other technology. For example, Flash Player 11 was released only a month ago, and it now enables fluid, cinematic hardware accelerated 2D and 3D visuals for more people on the web than any other technology. Flash Player uniquely does for the desktop what apps do for phones and tablets: it helps ensure that what you imagine is exactly what your users will see. Flash Player remains the best technology for delivering premium experiences on the desktop, period. Focusing helps us make sure that we continue to drive that innovation.

We are not stepping out of the mobile space with Flash; we are just focusing on what makes sense and where Flash looks great: standalone apps with AIR.

In the long term, we're actively working on an ambitious future for Flash. The implementation details may change, as we've been talking about today. We believe that the DNA of Flash doesn't reside in those implementation details, but in our promise to make it easy to create and deliver the most amazing experiences everywhere. We're focusing on fulfilling that promise, and we’re excited to see what the future – and our community – will bring.
