The Internet Archive’s Wayback Machine is deceptively simple — plug in a website and you can see copies of it over time.
What you don’t see is the massive amount of effort, data and storage necessary to capture and maintain those archives. Filmmaker Jonathan Minard’s documentary Internet Archive takes a behind-the-scenes look at how (and why) the Internet Archive’s efforts are preserving the web as we know it.
The interview with Brewster Kahle, founder of the Internet Archive, offers a look not just at the idea behind the archive, but also at the actual servers that hold the 10 petabytes of archived websites, books, movies, music, and television broadcasts that the Internet Archive currently stores.
Researchers have developed a computational model that can predict video game players’ in-game performance and provide a corresponding challenge they can beat, leading to quicker mastery of new skills. The advance could not only improve user experiences with video games but also benefit applications beyond the gaming world.
Apparently Women Love This 13-Year-Old Skateboarder Named Baby Scumbag
Steven Fernandez, aka Baby Scumbag, is just a normal 13-year-old skater from a bad neighborhood in LA. A normal 13-year-old skater who’s sponsored by a bunch of companies, has 38,000 subscribers on Facebook and 140,000 followers on Instagram, and gets photographed with guns and sexy (adult) women. He’s been skating since he was nine (here’s a video of him at 11), but unlike other absurdly talented kids like Rene Serrano and Evan Doherty, he’s developed a whole persona that revolves around trying to get girls and eating junk food (again: typical 13-year-old). It’s hard to tell how much of that is him putting on an act and how much of that is real, but either way, young Steven knows more about what people on the internet like than all the “social media gurus” two and three times his age put together. I called him to ask what he wants to be when he grows up.
VICE: Hey, Steven, how’s it going? I didn’t force you to miss school, right?
Baby Scumbag: Hey, VICE lady. Just chillin’. Just got home from school. Got out a little early.
You like school, or what?
Yeah, school is cool, but it’s kind of tough out here in poverty. You see a lot of bad stuff around here, like gang-related stuff, drugs. I live in Compton, California. The border of South Central.
So, you’re super popular at school, right?
Nah, I’m just a normal kid going to school. An average teenager.
How did you start getting sponsored?
Well, it all started when I posted a video of me skateboarding, and people actually enjoyed watching the video. As I started making more videos, I started getting more sponsors as well.
What’s a typical day in the life of Baby Scumbag?
Hang out at school, homework, skateboarding, maybe even go film. And a little masturbation.
Hybrids. Image: Screenshot/Webmonkey.
The advent of hybrid laptops that double as tablets or offer some sort of touch input has greatly complicated the lives of web developers.
A big part of developing for today’s myriad screens is knowing when to adjust the interface, based not just on screen size, but other details like input device. Fingers are far less precise than a mouse, which means bigger buttons, form fields and other input areas.
But with hybrid devices like touch screen Windows 8 laptops or dockable Android tablets with keyboards, how do you know whether the user is browsing with a mouse or a finger?
Over on the Mozilla Hacks blog, Patrick Lauke tackles that question in an article on detecting touch-capable devices. Lauke covers the relatively simple case of touch-only devices, like those running iOS, before diving into the far more complex problem of hybrid devices.
Lauke’s answer? If developing for the web hasn’t already taught you this lesson, perhaps hybrid devices will — learn to live with uncertainty and accept that you can’t control everything.
What’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities.
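The trap Lauke describes is easy to fall into. A rough sketch of the kind of naive capability check he warns against (the helper name here is mine, not from his article):

```javascript
// Naive touch-capability check (hypothetical helper name).
// In a browser, a touchscreen Windows 8 laptop returns true here
// even when the user is browsing with a mouse. That is exactly the
// problem: this tells you the device CAN accept touch, not that
// touch is what the user is using right now.
function hasTouch() {
  return typeof window !== 'undefined' && 'ontouchstart' in window;
}
```

Gating your whole interface on a check like this is what leads to mouse users on hybrid machines getting a stripped-down "touch" experience, which is why Lauke argues for input-agnostic code instead.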
While learning to live with uncertainty and providing interfaces that work with any input sounds nice in theory, developers are bound to want something a bit more concrete. There’s some hope on the horizon. Microsoft has proposed the Pointer Events spec (and created a build of Webkit that supports it). And the CSS Media Queries Level 4 spec will offer a pointer query to see what sort of input device is being used (mouse, finger, stylus etc).
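As drafted at the time of writing, the Media Queries Level 4 pointer feature would let a stylesheet adapt to input precision directly. A sketch based on the draft (the syntax may change before the spec stabilizes, and no shipping browser honors it yet):

```css
/* Coarse pointer (a finger): enlarge tap targets.
   Based on the Media Queries Level 4 draft; syntax may change. */
@media (pointer: coarse) {
  button,
  input[type="checkbox"] {
    min-width: 44px;
    min-height: 44px;
  }
}

/* Fine pointer (a mouse or stylus): compact controls are fine. */
@media (pointer: fine) {
  button {
    min-height: 24px;
  }
}
```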
Unfortunately, neither Pointer Events nor Media Queries Level 4 are supported in today’s browsers. Eventually there probably will be some way to easily detect and know for certain which input device is being used, but for the time being you’re going to have to live with some level of uncertainty. Be sure to read through Lauke’s post for more details and some sample code.
Heather Rousseau spent ten days last fall photographing and interviewing people living and working in western Colorado, documenting their relationships with the land, energy and water. “Last summer, Colorado—like much of the rest of the country—saw some of the driest and hottest conditions on record,” recalls Rousseau. “Since 80 percent of the state’s population lives [...]
Learning through music and art: Doug Goodkin at TEDxConejoSalon
Doug Goodkin is an internationally recognized teacher of Orff Schulwerk, a dynamic approach to music education. He is currently in his 38th year at The San F...
Aurich Lawson / Thinkstock
It's hard to believe that just a few decades ago, touchscreen technology could only be found in science fiction books and film. These days, it's almost unfathomable how we once got through our daily tasks without a trusty tablet or smartphone nearby, but it doesn't stop there. Touchscreens really are everywhere. Homes, cars, restaurants, stores, planes, wherever—they fill our lives in spaces public and private.
It took generations and several major technological advancements for touchscreens to achieve this kind of presence. Although the underlying technology behind touchscreens can be traced back to the 1940s, there's plenty of evidence suggesting touchscreens weren't feasible until at least 1965. Popular science fiction television shows like Star Trek didn't even refer to the technology until Star Trek: The Next Generation debuted in 1987, almost two decades after touchscreen technology was first deemed possible. But their inclusion in the series paralleled the advancements in the technology world, and by the late 1980s, touchscreens finally appeared realistic enough that consumers could actually bring the technology into their own homes.
This article is the first of a three-part series on touchscreen technology's journey from fiction to fact. The first three decades of touch are important to reflect upon in order to really appreciate the multitouch technology we're so used to having today. Today, we'll look at when these technologies first arose and who introduced them, plus we'll discuss several other pioneers who played a big role in advancing touch. Future entries in this series will study how the changes in touch displays led to essential devices for our lives today and where the technology might take us in the future. But first, let's put finger to screen and travel to the 1960s.
Anso Labs was the cloud computing startup that originated at NASA, where the original ideas for OpenStack, the open source cloud computing platform, were born. It was acquired by Rackspace a little more than two years ago.
It was a small team. But now a lot of the people who ran Anso Labs are back with a new outfit, still devoted to cloud computing, and still devoted to OpenStack. It’s called Nebula. And it builds a turnkey computer that will turn an ordinary rack of servers into a cloud-ready system, running — you guessed it — OpenStack.
Based in Mountain View, Calif., Nebula claims to have an answer for any company that has ever wanted to build its own private cloud system and not rely on outside vendors like Amazon or Hewlett-Packard or Rackspace to run it for them.
It’s called the Nebula One. And the setup is pretty simple, said Nebula CEO and founder Chris Kemp: Plug the servers into the Nebula One, then you “turn it on and it boots up cloud.” All of the provisioning and management that a service provider would normally charge you for has been built into a hardware device. There are no services to buy, no consultants to pay to set it up. “Turn on the power switch, and an hour later you have a petascale cloud running on your premises,” Kemp told me.
The Nebula One sits at the top of a rack of servers; on its back are 48 Ethernet ports. It runs an operating system called Cosmos that grabs all the memory and storage and CPU capacity from every server in the rack and makes them part of the cloud. It doesn’t matter who made them — Dell, Hewlett-Packard or IBM.
Kemp named two customers: Genentech and Xerox’s research lab, PARC. There are more customer names coming, he says, and the company already boasts investments from Kleiner Perkins, Highland Capital and Comcast Ventures. Nebula is also the only startup that is a platinum member of the OpenStack Foundation. Others include IBM, HP, Rackspace, Red Hat and AT&T.
If OpenStack becomes as easy to deploy as Kemp says it can be, a lot of companies — those that can afford to have their own data centers, anyway — are going to have their own clouds. And that is sort of the point.
Root access: How your startup should work with advisors, with David Weekly
Don Dodge interviews David Weekly about how to use advisors for your startup effectively and what kind of contract to have with them. David Weekly founded PB...
Look Ma, no floats! Image: Adobe
HTML5 and CSS 3 offer web developers new semantic tags, native animation tools, server-side fonts and much more, but that’s not the end of the story. In fact, for developers slogging away in the web design trenches, one of the most promising parts of CSS 3 is still just over the horizon — true page layout tools.
Soon, however, you’ll be able to throw out your floats and embrace a better way — the CSS Flexible Box Model, better known as simply Flexbox. Flexbox enables you to create complex layouts with only a few lines of code — no more floats and “clearfix” hacks.
Perhaps even more powerful — especially for those building responsive websites — the Flexbox order property allows you to create layouts completely independent of the HTML source order. Want the footer at the top of the page for some reason? No problem, just set your footer CSS to order: 1;. Flexbox also makes it possible to do vertical centering. Finally.
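To make that concrete, here's a minimal sketch using the new syntax (the class names and page structure are hypothetical, and in today's browsers you'd still need vendor prefixes):

```css
/* Hypothetical page: .header, .main, .footer inside .page. */
.page {
  display: flex;
  flex-direction: column;
}

/* Reorder visually without touching the HTML:
   lower order values come first. */
.footer { order: 1; }
.header { order: 2; }
.main   { order: 3; }

/* And the long-awaited vertical centering: */
.hero {
  display: flex;
  align-items: center;     /* center children on the cross axis */
  justify-content: center; /* center children on the main axis */
}
```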
We’ve looked at Flexbox in the past, but, unfortunately, the spec has undergone a serious rewrite since then, which renders older code obsolete. If you’d like to get up to speed with the new syntax, the Adobe Developer Blog recently published a guide to working with Flexbox by developer Steven Bradley.
Bradley walks through the process of using Flexbox in both mobile and desktop layouts, rearranging source order and elements to get both layouts working with a fraction of the code it would take to do the same using floats and other, older layout tools. The best way to wrap your head around Flexbox is to see it in action, so be sure to follow the links to Bradley’s demo page using either Chrome, Opera or Firefox 20+.
For some it may still be too early to use Flexbox. Browser support is improving, but obviously older browsers will never support Flexbox, so bear that in mind. Opera 12 supports the new syntax, no prefix necessary. Chrome supports the new syntax, but needs the -webkit prefix. Like Opera, Firefox 22 supports the unprefixed version of the new spec. Prior to v22 (currently in the beta channel), Firefox supports the old syntax. IE 10 supports the older Flexbox syntax. Most mobile browsers support the older syntax, though that is starting to change. [Update: Mozilla developer Daniel Holbert, who is working on the Flexbox code in Firefox, wrote to let me know that unprefixed Flexbox support has been pushed back to Firefox 22. The new Flexbox syntax is actually part of Firefox 20 and up, but until v22 arrives it's disabled by default. You can turn it on by heading to about:config, searching for the layout.css.flexbox.enabled pref, and setting it to true; the modern syntax will then work.]
So, as of this writing, only two web browsers really support the new Flexbox syntax, though Firefox will make that three in the next month or so.
But there is a way to work around some of the issues. First off, check out Chris Coyier’s article on mixing the old and new syntaxes to get the widest possible browser support. Coyier’s methods will get your Flexbox layouts working in pretty much everything but IE 9 and below.
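The gist of Coyier's approach is to stack every variant of the display declaration so each browser applies the last one it recognizes and ignores the rest. Roughly (see his article for the full property-by-property mappings):

```css
/* Old-plus-new Flexbox declarations, broadest support first.
   Each browser keeps the last value it understands. */
.page {
  display: -webkit-box;  /* old syntax: Safari, older Chrome, Android */
  display: -moz-box;     /* old syntax: Firefox 21 and below */
  display: -ms-flexbox;  /* tweener syntax: IE 10 */
  display: -webkit-flex; /* new syntax, prefixed: Chrome */
  display: flex;         /* new syntax: Opera 12.1, Firefox 22+ */
}
```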
If you’re working on a personal site, that might be okay — IE 9 and below would just get a simplified, linear layout. Or you could serve an extra stylesheet with some floats to older versions of IE (or use targeted CSS classes if you prefer). That defeats some of the benefits of Flexbox since you’ll be writing floats and the like for IE, but when usage drops off you can just dump that code and help move your site, and the web, forward.