
This week, I'm reporting from the Aquarius undersea research base in Key Largo, Florida. The habitat is the world's last undersea research base, and because NOAA is pulling funding from the 22-year-old facility in September, this week's mission is its last scheduled one.

This is a video of oceanographer and National Geographic Explorer-in-Residence Sylvia Earle, taken a day or two ago. She's being filmed on Aquarius with a Red camera in a waterproof housing, tethered to an internet connection in the base. Sylvia's helmet, a custom variation of the helmets that working divers use, is equipped with a point-of-view camera and audio comms. The entire thing was streamed over Ustream a few days ago. In this section of the video, she answers a broad and simple question: why should we care about the ocean?

The answer she gives above is, in typical Earle style, heartbreaking. The oceans have been in trouble for quite a while now, but the video above is taking place only because Sylvia is standing up this week not only for the oceans but for the Aquarius habitat itself, which she believes is a critical tool, the last of its kind, for ocean scientists and for the ocean itself.

When the base shuts down, the world will lose its only publicly funded saturation diving facility, and that loss hurts science for three main reasons. First, in Aquarius, scientists can conduct undersea experiments that are too intricate or too dependent on direct observation for robots. Second, scientists can stay in deep water nine to ten times longer than a scuba diver can, because Aquanauts never have to surface and risk decompression sickness at the end of a day. Lastly, because data from the reef has been coming in for the last 20 or so years, it serves as a constant yardstick for the health of the oceans in general. That data flow should not be interrupted.

The other thing that is super confusing about the decision to pull the plug on Aquarius's parent program, the National Undersea Research Program (AKA NURP), is that the Hawaii Undersea Research Lab (AKA HURL), also under NURP, is being shut down as well.

While Woods Hole's Alvin is being recommissioned, HURL's Pisces subs are our only two subs capable of taking man to a depth of 2,000 meters. I spent a lot of time this spring hanging out on the pier on the windward side of Hawaii where the Pisces subs are kept. Terry Kerby, the longtime pilot of the subs, told me that Pisces IV and V are our two best observational subs because their viewports point forward rather than down, as they do on Navy-designed subs, which are meant to cruise in the midwater rather than at depth.

Meanwhile, China, France, India, Russia and others are building subs capable of 6,000 meters, and James Cameron, Richard Branson and other visionaries of the deep are spending their own money to bring man to the deep.

NOAA is cutting programs largely because of the rising costs of weather satellites, which are critically important to millions, especially after Katrina. But those satellites cost over $800m, while Aquarius and HURL's subs cost $5m total per year to run. Some public schools cost more than that to run.

It's confusing to me why anyone would think it's a good idea to handicap the very machines that let us understand the ocean as human beings and not just as data-collecting bots. The perception that comes from peripheral vision, the heat felt from a hydrothermal vent, the inner-ear sensations a pilot feels as a strong current jostles a sub: all of these are important.

A lot has been made of the advance of ROVs in the last few years, and they are cheaper and more capable than ever. Robot arms can be strong and articulate at the same time. Cameras can see in darker places than our own eyes can. But robots lack the imagination, creativity, and intuition that human observers in a habitat like Aquarius use to create the theories that the data is then gathered to test.

I'm not saying we don't need ROVs. I love ROVs. But asking us to explore the sea without being there is like expecting to explore Everest with a telescope.

We have to keep going to the places we seek to understand, to see with our own eyes.

And with that, I am going diving now.


King City collects Brandon Graham's magnificent Tokyopop comic serial in one mammoth, $11 (cheap!) trade paperback edition, and man, is that a deal.

Take the sprawling, weird, perverse cityscape of Transmetropolitan, mix in the goofy, punny humor of Tank Girl, add ultraviolent gang warfare, the impending resurrection of a death-god, and a secret society of cat-masters whose feline familiars can serve as super-weapons and tactical material, and you're getting in the neighbourhood of King City.

Graham's black-and-white line drawings have the detail of a two-page spread in MAD Magazine and a little bit of Sergio Aragonés in their style, if Aragonés were more interested in drawing the battle-scarred veterans of a Korean xombie war who consume each others' powdered bones to drive away the madness.

Despite the fact that this is a very, very funny story, it manages to be more than a comedy. Joe the cat-master's lost love, Pete the bagman's moral crisis, and Max the veteran's trauma are all real enough to tug at your heart-strings, even as you read the goofy puns off the fine-print labels on the fetishistically detailed illustrations showing King City and its weird and wonderful inhabitants.

JWZ wrote "It's the best comic-book-type thing I've read in quite some time. The trade is a huge phonebook-sized thing and it's awesome." He's right.

King City

(via JWZ)


St Colin and the Dragon is a perfectly great 27-page kids' comic about a dragon that hatches in a faraway kingdom and the dumb things the residents of the kingdom try in order to get rid of it. They give it an endless parade of sheep to eat, in the hopes that it will mature, grow wings, and fly away. But no such thing happens. So Colin, the king's disgraced ex-squire, decides to join the knights who ride out to challenge it. All the big, tough guys are defeated, but Colin figures out what the dragon really wants and saves the kingdom. And then things get weird. In a good way.

St Colin was created by Philippa Rice, whose long-running My Cardboard Life comic (more aimed at grownups) uses the same torn-paper style that makes St Colin such a treat.

I read St Colin to my four-and-a-half-year-old at bedtime earlier this week, and it's had two re-reads since, because she loves it. There's also plenty of grown-up fun in the humorous and sometimes wry dialogue.

You can buy St Colin on its own for £6.50, or together with the massive, perfect-bound My Cardboard Life book for £15.00, should you want one book for the kid(s) and another for the grownup(s). I certainly recommend both to you.



Jonathan Fetter-Vorm's Trinity is a nonfiction book-length comic for adults about the birth of nuclear weapons. It covers the wartime events that spawned the idea of a nuclear weapons program, the intense period of wrangling that gave rise to the Manhattan Project, the strange scientific town in the New Mexico desert that created the A-bomb, the tactical and political decision-making process that led to the bombing of Hiroshima and Nagasaki, the unspeakable horror experienced by the people in those cities and the existential crises the Nuclear Age triggered for scientists, politicians, and the world at large. Though this is primarily a history book, Trinity is also a pretty good nuclear physics primer, making good use of the graphic novel form to literally illustrate the violence of atoms tearing themselves apart, and the weird, ingenious, improvised mechanisms for triggering and controlling that violence.

I think Trinity is a very good book. It manages to be short and straightforward without being crude or lacking nuance. Fetter-Vorm does a great job of bringing the personalities involved in the bomb's creation to life, and of showing the way that human relationships -- as much as physics -- resulted in the bomb's invention and use. He walks a fine, non-partisan line on the need to bomb Hiroshima and Nagasaki, opting instead to lay out the facts in a (to my eye) fair and neutral way that neither argues that the bombing was a necessity, nor that it was a callous whim from a military apparatus that wanted to test out its latest gadget.

More than anything, though, Trinity is unflinching in counting the human cost of the bomb. The pages given over to the aftermath in the bombed cities are, if anything, understated. No gross-outs here. But they manage to convey so much horror that I had to stop reading so I could finish my lunch. Also wrenching, in its own way, is the section on the impact that the news from Japan had on the Trinity scientists and their families. Fetter-Vorm does a credible (and disturbing) job of putting you in the shoes of people who wanted to "end the war," but who found no respite in the war's end, as they struggled with the feeling of blood on their hands.

Trinity illuminates a turning-point in human history, and does so with admirable pace, grace, and skill.

Trinity

(Excerpted from TRINITY: A Graphic History of the First Atomic Bomb, by Jonathan Fetter-Vorm, to be published by Hill and Wang, a division of Farrar, Straus and Giroux, LLC in June 2012. Text copyright © 2012 by Jonathan Fetter-Vorm and Michael Gallagher. Illustrations copyright © 2012 by Jonathan Fetter-Vorm. All rights reserved.)


The foundation of Web security rests on the notion that two very large prime numbers, numbers divisible only by themselves and 1, once multiplied together are irreducibly difficult to tease back apart. Researchers have discovered that, in some cases, a lack of entropy—a lack of disorder in the selection of prime numbers—undermines that foundation. By analogy: most buildings on the Web would stand in spite of gale winds and magnitude 10 earthquakes, while others can be pushed over with a finger or a breath. The weakness affects as many as 4 in 1,000 publicly available secured Web servers, but in practice it appears that few to no popular Web sites are at risk.

Why primes are useful for security

Web security relies on generating two large prime numbers with sufficient randomness that it is vanishingly unlikely that any two parties in the world would ever stumble across the same number twice—ever. That uniqueness is impossible to guarantee, but it is nevertheless a requirement of RSA, the most universally relied-upon algorithm for securing Web browser/server transactions via HTTPS.

Here's how it works. Take two big prime numbers that are also similar in size and multiply them:

858599503 * 879190747 = 754872738416398741

The only legitimate divisors (the factors of the product) of such a number are 1, the number itself, and the two primes. That it divides by 1 and by itself is obvious. But even using all available mathematical and computational approaches, it takes an inordinate amount of time to tease out the two prime numbers used to create it.

So long as enough entropy is involved, larger primes and thus a larger product—at least 1024 bits long or over 300 digits in decimal, but, as we'll see, far better at 2048 or 4096 bits—result in better security. (The example above, whose product is only 60 bits long, would be woefully inadequate, taking perhaps seconds to factor.)
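To make that scale concrete, here's a quick sketch in Python (a toy illustration of my own, not anything from the research) that rebuilds the product above, shows how small it is, and recovers its primes with Pollard's rho, a standard factoring method that makes short work of numbers this size:

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n via Pollard's rho (Floyd cycle detection)."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        c = random.randrange(1, n)
        y, d = x, 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n          # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                       # a failed round simply restarts with new parameters
            return d

n = 858599503 * 879190747
print(n)               # 754872738416398741, matching the product above
print(n.bit_length())  # 60 bits -- tiny next to a real 1024- or 2048-bit modulus
p = pollard_rho(n)
print(sorted([p, n // p]))  # the two original primes, found in well under a second
```

At 1024 bits and up, no known refinement of this kind of attack finishes in any useful amount of time; that is the entire bet RSA makes.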

Less entropy means that the same large prime numbers wind up being used repeatedly, and that makes it simple to find the factors of keys on publicly reachable sites, routers, and other hardware.

What's wrong with RSA

The flaw in randomness that has been revealed has to do with the public part of public-key cryptography; RSA is a public-key algorithm, and virtually all public Web servers rely on it. With the RSA algorithm, the two primes are used as part of a method that derives two related exponents—powers to which a number is raised. The exponents and the primes' product are used to encrypt and decrypt a message. The product of the primes and the public exponent may be distributed freely without compromising the private exponent (or the two prime factors, for that matter, which aren't used again). A sender with a recipient's public key encrypts a message; the recipient uses the private key to reverse the process (using a multiplicative inverse operation, as I just learned).
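For the curious, here's a minimal sketch of those relationships in Python, reusing the small primes from the earlier example (real keys use primes hundreds of digits long, and real implementations add padding and other safeguards this toy omits; the modular-inverse form of pow needs Python 3.8+):

```python
# Toy RSA: the same arithmetic as the real thing, at laughably small scale.
p, q = 858599503, 879190747
n = p * q                          # the public modulus: the primes' product
phi = (p - 1) * (q - 1)            # the value that ties the two exponents together
e = 65537                          # the public exponent, a common real-world choice
d = pow(e, -1, phi)                # the private exponent: e's multiplicative inverse mod phi

message = 42424242                 # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # anyone holding (n, e) can encrypt...
recovered = pow(ciphertext, d, n)  # ...but only the holder of d can reverse it
assert recovered == message
```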

The weakness found by two different sets of researchers doesn't relate to the algorithm. It's already known that sufficiently small products may be factored back to their original primes using the best-known computational method. Products of 512 bits (about 155 decimal digits) or smaller are essentially broken; 768 bits is within reach, but only with substantial effort.

No, the problem isn't a new method of speeding up factorization. Rather, as noted earlier, it's about entropy and the public nature of public keys. Web sites that use SSL/TLS for encrypting connections with clients publish a certificate that the client receives as part of an initial handshake to establish security. This certificate includes the public key information along with other identifiers and, in nearly all circumstances, a signature from a certificate authority (CA) to allow additional validation. (There are problems with certificate authorities' authoritativeness, too, but that's a different issue.)

These certificates from public Web sites may be freely retrieved, and that is happening ever more frequently: growing distrust of the security measures CAs employ to keep SSL/TLS certificates from being issued to illegitimate parties has led to various projects that scan all public Web sites on a continuous basis, to detect and alert users and site operators if the certificate for a given domain changes over time. The Electronic Frontier Foundation (EFF), for one, most recently pulled down 7.1 million certificates as part of its SSL Observatory. Another research group harvested millions of public certificates through their own efforts in a relatively fast and painless operation.
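As a sense of how low the bar is, here's a sketch of the harvesting step, using Python's standard library plus the widely used third-party cryptography package (the hostname is a placeholder, and this assumes the site presents an RSA certificate):

```python
import ssl
from cryptography import x509

# Fetch the certificate any browser would receive during the TLS handshake.
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# Extract the RSA public numbers: n is the product of the two secret primes.
numbers = cert.public_key().public_numbers()
print(numbers.n.bit_length())  # typically 1024 or 2048
print(numbers.e)               # almost always 65537
```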

Once you have a large database of certificates, you can extract the public keys and look at the products. Researchers from the École Polytechnique Fédérale de Lausanne (Switzerland) and one independent collaborator released an academic paper on February 14th that examined both public keys in SSL/TLS certificates and PGP-style public keys typically used for person-to-person communications. The PGP keys revealed almost no problems; RSA keys in Web certificates, on the other hand, were very troublesome. The authors found that about 0.2% (2 in 1,000) of the dominant 1024-bit RSA keys they checked in a first set, and closer to 0.4% of a larger set with more recent data, were vulnerable because two or more certificates shared a prime factor.

It's not impossible for RSA key generation software to produce "collisions," in which the same primes are generated on multiple occasions by different systems. But it should be far rarer than this. Thus, there is some flaw that prevents the degree of randomness necessary to ensure the least possible repetition. In private use, this wouldn't matter. But because public-key cryptography relies on publishing keys, such overlap may be found easily. (CAs don't typically generate the public/private key pair. System administrators generate an RSA key using software like openssl on a computer under the admin's control. That key is then bound up in a certificate signing request that's submitted to a CA to get its imprimatur.)

I'm not a mathematician, so I can't translate the specifics of how sets of different products sharing a single prime factor allow a relatively easy extraction of the other prime. But once you have both primes, the calculation necessary to create the private key exponent may be performed without any fuss. That eliminates all obfuscation from transactions conducted using that RSA key.
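The heart of the trick, though, is nothing more exotic than Euclid's greatest-common-divisor algorithm: if two public moduli share one prime, their GCD is that prime, and simple division then yields each modulus's other prime. Here's a toy demonstration in Python, with made-up moduli standing in for two real certificates:

```python
import math

shared_p = 858599503       # the prime two carelessly generated keys have in common
n1 = shared_p * 879190747  # "certificate one's" public modulus
n2 = shared_p * 982451653  # "certificate two's" public modulus

# No factoring needed: Euclid's algorithm exposes the shared prime instantly,
# and it stays instant even for real 1024-bit moduli.
p = math.gcd(n1, n2)
assert p == shared_p
print(p, n1 // p, n2 // p)  # all the primes, and thus both private keys
```

The researchers ran an optimized batch form of this comparison across millions of harvested moduli at once; the GCD arithmetic scales, where factoring any single modulus would not.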

SSL/TLS and other methods of establishing secure network connections use public-key cryptography for the initial part of the setup, to exchange a short symmetric key that both parties use; it's much faster to compute with, and is used only for a session or part of a session. The public-key portion ensures that the session key isn't sniffed. But if the private key is known, the initial exchange of the session key may be decrypted through passive data sniffing, and thus the session key revealed. In some implementations, as EFF notes in a post on this research, learning the private key could allow decryption of previous encrypted communications, if someone attempting to intercept such traffic retained records for later cracking.
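A toy continuation of the sketches above shows exactly that exposure: the session key crosses the wire encrypted under the server's public key, so anyone who later learns the private exponent can decrypt a passive capture (again, a hypothetical illustration; real TLS adds padding and, in some modes, forward secrecy):

```python
import secrets

# The toy server key from the earlier sketch.
p, q = 858599503, 879190747
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

session_key = secrets.randbelow(n)     # the short-lived symmetric key
wire_capture = pow(session_key, e, n)  # all a passive eavesdropper ever sees

# Later: the attacker factors n (or GCDs it against another modulus), learns d,
# and the recorded handshake gives up the session key -- and with it the session.
assert pow(wire_capture, d, n) == session_key
```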

The Lausanne researchers worried about disclosure, as their work could be easily replicated. In the end, they notified as many certificate owners with extractable primes as they could, based on information in the certificates and at the Web sites where the certificates are used; they found that information to be quite spotty in practice. EFF is also engaged in notifying vulnerable parties.

The fallout

This is all troubling, but what's the upshot for practical threats?

First, the authors of the Lausanne paper found that 2048-bit RSA keys aren't invulnerable, but only a literal handful (10 keys out of 3.2 million in the latest dataset examined) could be factored. That compares to about 20,000 out of 3.7 million 1024-bit keys. EFF's recent scan also shows that Web sites are migrating to 2048 bits: nearly a quarter of all certificates moved from 1024 to 2048 bits between its August 2010 scan and its 2012 scan.

Second, for a broken certificate to be used to sniff data, an observer has to be on the wire, in a position between clients and the server. If you can break into a server to monitor traffic, you don't need to subvert SSL/TLS. You can grab the decrypted data on the server as it arrives. Being able to sniff at a major data center or Internet exchange where SSL/TLS is assumed to cover any risks would possibly be easier to arrange, but not trivial. Being able to be a man in the middle at the right point in the topology is largely relegated to corporate spies and government agencies.

Third, a separate group of researchers outed themselves on February 15th; they had been engaged in a similar set of research. In a post at Princeton's Freedom to Tinker blog, Nadia Heninger explained that we shouldn't get our IP panties in a wad over this. Her group, which has a forthcoming paper it will release after finishing disclosures to affected hardware manufacturers, found that these weak public keys appeared primarily in embedded devices, such as consumer routers and corporate firewalls. She suggests that embedded-device makers have the most work to do.

This is the dilemma with all security and encryption issues. This is a problem—a serious problem! And it's not theoretical, but the practical impact appears to be small, and it's fixable in a reasonable amount of time without an insane amount of effort. But there's no way to know which popular sites used weak RSA keys unless someone replicates this work and publishes it openly. Nor can we know whether any data was extracted from sessions conducted with them. The odds seem to favor few sites and no data, but it's simply impossible to know.

You can ignore any hype about this problem. RSA, public-key cryptography, and SSL/TLS certificates aren't broken or even damaged. We just need a few more random acts of key generation (and, when making keys, an option to check them against a public database of identical prime products), and the problem will disappear.


Before the Lights Go Out is Maggie's new book about how our current energy systems work, and how we'll have to change them in the future. It comes out April 10th and is available for pre-order now. (E-book pre-orders coming soon!) Over the next couple of months, Maggie will be posting some energy-related stories based on things she learned while researching the book. This is one of them.

Steve_Saus submitterated this video that combines 14 years of weather radar images with a soothing piano concerto. It's a neat thing to watch a couple minutes of (though I'm not sure I needed to sit around for all 33 minutes of the video). It also reminded me of something really interesting that I learned about U.S. weather patterns and alternative energy.

Weather data, like the kind visualized here, can be collected, analyzed, and turned into algorithms that show us, in increasingly granular detail, what we can expect the weather to do in a specific part of the United States. Today, you can even break this information down to show what happens in one small part of a state compared to another small part. And that's important. As we increase our reliance on sources of energy that are based on weather patterns, this kind of information will become crucial to not only predicting how much power we can expect to get from a given wind farm, but also in deciding where to build that wind farm in the first place.

Take Texas, which has the most installed wind power capacity of any U.S. state, as an example. That's great. Unfortunately, most of those wind farms are built in places where we can't get the full benefit of that wind power, because the wind peaks at night—just as electricity demand hits its low point. A simple change in location would make each wind turbine more useful, and make it a better investment.

It works like this ...

Wind patterns vary a lot from place to place and season to season, says Greg Poulos, Ph.D., a meteorologist and director of V-Bar, LLC, a company that consults with energy companies about trends in wind patterns. In general, though, wind farms from Texas to North Dakota are subject to something called the Great Plains Low Level Jet.

This phenomenon happens because said Plains are flat. There are very few geographic features out there to impede the strong winds that blow through the region. During the day, heat rising off the ground causes turbulence and friction in the atmosphere above the Plains, slowing the wind down somewhat. But at night, that turbulence disappears, and the wind accelerates.

There are exceptions to this rule, however, and they are really interesting. Build a wind farm out in far West Texas, and you have to deal with the Great Plains Low Level Jet—wind power and potential electric production peak at the same time the grid hits its nadir in electric demand. That's no good, because there's no storage on the electric grid. All the potential electric power the turbines could be producing at night simply goes to waste if nobody wants it.

But, if you build your wind farm on Texas' Gulf Coast, you don't have that problem. Instead, a coastal turbine would be subject to the Sea Breeze Effect, caused by differences in temperature between the air above the water and the air above the land. In those places, wind power—and electric generation—actually peaks on summer afternoons, right when demand for electricity is peaking, too.

Today, oil and gas companies spend a lot of time and money prospecting for new reserves of fuel. In the future, we'll prospect for wind and solar, too, using weather pattern data to spot the best sites where we get the most energy bang for our infrastructure buck.

Image: Mystery Photo, a Creative Commons Attribution (2.0) image from randa's photostream


I thought this week could do with some more "fanboy", so I cobbled together this blast of Every Apple Design Ever (ish) in 30 seconds. I'm a Sony guy at heart, but even if each of its products were given only a single frame of animation, such a video would not end before the heat death of the universe. Also, times have changed.

Image and sourcing credits go to The Shrine of Apple, Apple-History, Edwin Tofslie, MacTracker, Ed Uthman, operating-system.org, and Apple itself.

BONUS FEATURE! After the jump, Every NeXT Design Ever in 30 seconds!

Image credits: NeXT and Alexander Schaelss



Lockdown

The coming war on general-purpose computing

By Cory Doctorow

This article is based on a keynote speech to the Chaos Computer Congress in Berlin, Dec. 2011.

General-purpose computers are astounding. They're so astounding that our society still struggles to come to grips with them, what they're for, how to accommodate them, and how to cope with them. This brings us back to something you might be sick of reading about: copyright.

But bear with me, because this is about something more important. The shape of the copyright wars clues us into an upcoming fight over the destiny of the general-purpose computer itself.

In the beginning, we had packaged software and we had sneakernet. We had floppy disks in ziplock bags, in cardboard boxes, hung on pegs in shops, and sold like candy bars and magazines. They were eminently susceptible to duplication, and were duplicated quickly and widely, to the great chagrin of people who made and sold software.

Enter Digital Rights Management in its most primitive forms: let's call it DRM 0.96. They introduced physical indicia which the software checked for—deliberate damage, dongles, hidden sectors—and challenge-response protocols that required possession of large, unwieldy manuals that were difficult to copy.

These failed for two reasons. First, they were commercially unpopular, because they reduced the usefulness of the software to the legitimate purchasers. Honest buyers resented the non-functionality of their backups, they hated the loss of scarce ports to the authentication dongles, and they chafed at the inconvenience of having to lug around large manuals when they wanted to run their software. Second, these didn't stop pirates, who found it trivial to patch the software and bypass authentication. People who took the software without paying for it were untouched.

Typically, the way this happened was that a programmer, in possession of technology and expertise of equivalent sophistication to the software vendor's own, would reverse-engineer the software and circulate cracked versions. While this sounds highly specialized, it really wasn't. Figuring out what recalcitrant programs were doing and routing around media defects were core skills for computer programmers, especially in the era of fragile floppy disks and the rough-and-ready early days of software development. Anti-copying strategies only became more fraught as networks spread; once we had bulletin boards, online services, USENET newsgroups and mailing lists, the expertise of people who figured out how to defeat these authentication systems could be packaged up in software as little crack files. As network capacity increased, the cracked disk images or executables themselves could be spread on their own.

This gave us DRM 1.0. By 1996, it became clear to everyone in the halls of power that there was something important about to happen. We were about to have an information economy, whatever the Hell that was. They assumed it meant an economy where we bought and sold information. Information technology improves efficiency, so imagine the markets that an information economy would have! You could buy a book for a day, you could sell the right to watch the movie for a Euro, and then you could rent out the pause button for a penny per second. You could sell movies for one price in one country, at another price in another, and so on. The fantasies of those days were like a boring science fiction adaptation of the Old Testament Book of Numbers, a tedious enumeration of every permutation of things people do with information—and what might be charged for each.

Unfortunately for them, none of this would be possible unless they could control how people use their computers and the files we transfer to them. After all, it was easy to talk about selling someone a tune to download to their MP3 player, but not so easy to talk about the right to move music from the player to another device. But how the Hell could you stop that once you'd given them the file? In order to do so, you needed to figure out how to stop computers from running certain programs and inspecting certain files and processes. For example, you could encrypt the file, and then require the user to run a program that only unlocked the file under certain circumstances.

But, as they say on the Internet, now you have two problems.

You must now also stop the user from saving the file while it's unencrypted—which must happen eventually—and you must stop the user from figuring out where the unlocking program stores its keys, which would enable them to permanently decrypt the media and ditch the stupid player app entirely.

Now you have three problems: you must stop the users who figure out how to decrypt from sharing it with other users. Now you've got four problems, because you must stop the users who figure out how to extract secrets from unlocking programs from telling other users how to do it too. And now you've got five problems, because you must stop users who figure out how to extract these secrets from telling other users what the secrets were!
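To make the first of those problems concrete, here's a deliberately naive "unlocking program" sketched in Python (my own toy, not any vendor's actual scheme). Its fatal flaw is structural: for playback to happen at all, both the key and the decrypted bytes must exist on the user's machine, which is exactly where the user can grab them.

```python
SECRET_KEY = b"k3y"  # must ship inside the player for playback to work at all

def xor_cipher(data: bytes) -> bytes:
    """Toy stand-in for a real cipher; XOR is symmetric, so it locks and unlocks."""
    return bytes(b ^ SECRET_KEY[i % len(SECRET_KEY)] for i, b in enumerate(data))

locked_media = xor_cipher(b"one copyrighted song")  # what the store sells you

def play(locked: bytes) -> None:
    media = xor_cipher(locked)  # the plaintext must exist here to be played...
    print(media.decode())       # ...so nothing stops the user from saving it instead

play(locked_media)
```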

That's a lot of problems. But by 1996, we had a solution. We had the WIPO Copyright Treaty, passed by the United Nations World Intellectual Property Organization. This created laws that made it illegal to extract secrets from unlocking programs, and it created laws that made it illegal to extract media (such as songs and movies) from the unlocking programs while they were running. It created laws that made it illegal to tell people how to extract secrets from unlocking programs, and it created laws that made it illegal to host copyrighted works or the secrets. It also established a handy streamlined process that let you remove stuff from the Internet without having to screw around with lawyers, and judges, and all that crap.

And with that, illegal copying ended forever, the information economy blossomed into a beautiful flower that brought prosperity to the whole wide world; as they say on the aircraft carriers, "Mission Accomplished".

That's not how the story ends, of course, because pretty much anyone who understood computers and networks understood that these laws would create more problems than they could possibly solve. After all, these laws made it illegal to look inside your computer when it was running certain programs. They made it illegal to tell people what you found when you looked inside your computer, and they made it easy to censor material on the internet without having to prove that anything wrong had happened.

In short, they made unrealistic demands on reality and reality did not oblige them. Copying only got easier following the passage of these laws—copying will only ever get easier. Right now is as hard as copying will get. Your grandchildren will turn to you and say "Tell me again, Grandpa, about when it was hard to copy things in 2012, when you couldn't get a drive the size of your fingernail that could hold every song ever recorded, every movie ever made, every word ever spoken, every picture ever taken, everything, and transfer it in such a short period of time you didn't even notice it was doing it."

Reality asserts itself. Like the nursery-rhyme lady who swallows a spider to catch a fly, and has to swallow a bird to catch the spider, and a cat to catch the bird, these regulations, which have broad general appeal but are disastrous in their implementation, each beget a new regulation aimed at shoring up the failures of the last.

It's tempting to stop the story here and conclude that the problem is that lawmakers are either clueless or evil, or possibly evilly clueless. This is not a very satisfying place to go, because it's fundamentally a counsel of despair; it suggests that our problems cannot be solved for so long as stupidity and evilness are present in the halls of power, which is to say they will never be solved. But I have another theory about what's happened.

It's not that regulators don't understand information technology, because it should be possible to be a non-expert and still make a good law. MPs and Congressmen and so on are elected to represent districts and people, not disciplines and issues. We don't have a Member of Parliament for biochemistry, and we don't have a Senator from the great state of urban planning. And yet those people who are experts in policy and politics, not technical disciplines, still manage to pass good rules that make sense. That's because government relies on heuristics: rules of thumb about how to balance expert input from different sides of an issue.

Unfortunately, information technology confounds these heuristics—it kicks the crap out of them—in one important way.

The important tests of whether or not a regulation is fit for a purpose are first whether it will work, and second whether or not it will, in the course of doing its work, have effects on everything else. If I wanted Congress, Parliament, or the E.U. to regulate a wheel, it's unlikely I'd succeed. If I turned up, pointed out that bank robbers always make their escape on wheeled vehicles, and asked, "Can't we do something about this?", the answer would be "No". This is because we don't know how to make a wheel that is still generally useful for legitimate wheel applications, but useless to bad guys. We can all see that the general benefits of wheels are so profound that we'd be foolish to risk changing them on a foolish errand to stop bank robberies. Even if there were an epidemic of bank robberies—even if society were on the verge of collapse thanks to bank robberies—no-one would think that wheels were the right place to start solving our problems.

However, if I were to show up in that same body to say that I had absolute proof that hands-free phones were making cars dangerous, and I requested a law prohibiting hands-free phones in cars, the regulator might say "Yeah, I'd take your point, we'd do that."

We might disagree about whether or not this is a good idea, or whether or not my evidence made sense, but very few of us would say that once you take the hands-free phones out of the car, they stop being cars.

We understand that cars remain cars even if we remove features from them. Cars are special-purpose, at least in comparison to wheels, and all that the addition of a hands-free phone does is add one more feature to an already-specialized technology. There's a heuristic for this: special-purpose technologies are complex, and you can remove features from them without doing fundamental, disfiguring violence to their underlying utility.

This rule of thumb serves regulators well, by and large, but it is rendered null and void by the general-purpose computer and the general-purpose network—the PC and the Internet. If you think of computer software as a feature, a computer with spreadsheets running on it has a spreadsheet feature, and one that's running World of Warcraft has an MMORPG feature. The heuristic would lead you to think that a computer unable to run spreadsheets or games would be no more of an attack on computing than a ban on car-phones would be an attack on cars.

And, if you think of protocols and websites as features of the network, then saying "fix the Internet so that it doesn't run BitTorrent", or "fix the Internet so that thepiratebay.org no longer resolves," sounds a lot like "change the sound of busy signals," or "take that pizzeria on the corner off the phone network," and not like an attack on the fundamental principles of internetworking.

The rule of thumb works for cars, for houses, and for every other substantial area of technological regulation. Not realizing that it fails for the Internet does not make you evil, and it does not make you an ignoramus. It just makes you part of that vast majority of the world, for whom ideas like Turing completeness and end-to-end are meaningless.

So, our regulators go off, they blithely pass these laws, and they become part of the reality of our technological world. There are, suddenly, numbers that we aren't allowed to write down on the Internet, programs we're not allowed to publish, and all it takes to make legitimate material disappear from the Internet is the mere accusation of copyright infringement. It fails to attain the goal of the regulation, because it doesn't stop people from violating copyright, but it bears a kind of superficial resemblance to copyright enforcement—it satisfies the security syllogism: "something must be done, I am doing something, something has been done." As a result, any failures that arise can be blamed on the idea that the regulation doesn't go far enough, rather than the idea that it was flawed from the outset.

This kind of superficial resemblance and underlying divergence happens in other engineering contexts. I've a friend, who was once a senior executive at a big consumer packaged goods company, who told me what happened when the marketing department told the engineers that they'd thought up a great idea for detergent: from now on, they were going to make detergent that made your clothes newer every time you washed them!

After the engineers had tried unsuccessfully to convey the concept of entropy to the marketing department, they arrived at another solution: they'd develop a detergent that used enzymes that attacked loose fiber ends, the kind that you get with broken fibers that make your clothes look old. So every time you washed your clothes in the detergent, they would look newer. Unfortunately, that was because the detergent was digesting your clothes. Using it would literally cause your clothes to dissolve in the washing machine.

This was, needless to say, the opposite of making clothes newer. Instead, you were artificially aging them every time you washed them, and as the user, the more you deployed the "solution", the more drastic your measures had to be to keep your clothes up to date. Eventually, you would have to buy new clothes because the old ones fell apart.

Today we have marketing departments that say things such as "we don't need computers, we need appliances. Make me a computer that doesn't run every program, just a program that does this specialized task, like streaming audio, or routing packets, or playing Xbox games, and make sure it doesn't run programs that I haven't authorized that might undermine our profits."

On the surface, this seems like a reasonable idea: a program that does one specialized task. After all, we can put an electric motor in a blender, and we can install a motor in a dishwasher, and we don't worry if it's still possible to run a dishwashing program in a blender. But that's not what we do when we turn a computer into an appliance. We're not making a computer that runs only the "appliance" app; we're taking a computer that can run every program, then using a combination of rootkits, spyware, and code-signing to prevent the user from knowing which processes are running, from installing her own software, and from terminating processes that she doesn't want. In other words, an appliance is not a stripped-down computer—it is a fully functional computer with spyware on it out of the box.

We don't know how to build a general-purpose computer that is capable of running any program except for some program that we don't like, is prohibited by law, or which loses us money. The closest approximation that we have to this is a computer with spyware: a computer on which remote parties set policies without the computer user's knowledge, or over the objection of the computer's owner. Digital rights management always converges on malware.

In one famous incident—a gift to people who share this hypothesis—Sony loaded covert rootkit installers on 6 million audio CDs, which secretly executed programs that watched for attempts to read the sound files on CDs and terminated them. It also hid the rootkit's existence by causing the computer operating system's kernel to lie about which processes were running, and which files were present on the drive. But that's not the only example. Nintendo's 3DS opportunistically updates its firmware, and does an integrity check to make sure that you haven't altered the old firmware in any way. If it detects signs of tampering, it turns itself into a brick.

Human rights activists have raised alarms over UEFI, the new PC bootloader, which restricts your computer so it only runs "signed" operating systems, noting that repressive governments will likely withhold signatures from operating systems unless they allow for covert surveillance operations.

On the network side, attempts to make a network that can't be used for copyright infringement always converge with the surveillance measures that we know from repressive governments. Consider SOPA, the U.S. Stop Online Piracy Act, which bans innocuous tools such as DNSSec—a security suite that authenticates domain name information— because they might be used to defeat DNS blocking measures. It blocks Tor, an online anonymity tool sponsored by the U.S. Naval Research Laboratory and used by dissidents in oppressive regimes, because it can be used to circumvent IP blocking measures.

In fact, the Motion Picture Association of America, a SOPA proponent, circulated a memo citing research that SOPA might work because it uses the same measures as are used in Syria, China, and Uzbekistan. It argued that because these measures are effective in those countries, they would work in America, too!

It may seem like SOPA is the endgame in a long fight over copyright and the Internet, and it may seem that if we defeat SOPA, we'll be well on our way to securing the freedom of PCs and networks. But as I said at the beginning of this talk, this isn't about copyright.

The copyright wars are just the beta version of a long coming war on computation. The entertainment industry is just the first belligerent to take up arms, and we tend to think of it as particularly successful. After all, here is SOPA, trembling on the verge of passage, ready to break the Internet on a fundamental level—all in the name of preserving Top 40 music, reality TV shows, and Ashton Kutcher movies.

But the reality is that copyright legislation gets as far as it does precisely because it's not taken seriously by politicians. This is why, on one hand, Canada has had Parliament after Parliament introduce one awful copyright bill after another, but on the other hand, Parliament after Parliament has failed to actually vote on each bill. It's why SOPA, a bill composed of pure stupid and pieced together molecule-by-molecule into a kind of "Stupidite 250" normally only found in the heart of newborn stars, had its rushed-through hearings adjourned midway through the Christmas break: so that lawmakers could get into a vicious national debate over an important issue, unemployment insurance.

It's why the World Intellectual Property Organization is gulled time and again into enacting crazed, pig-ignorant copyright proposals: because when the nations of the world send their U.N. missions to Geneva, they send water experts, not copyright experts. They send health experts, not copyright experts. They send agriculture experts, not copyright experts, because copyright is just not as important.

Canada's Parliament didn't vote on its copyright bills because, of all the things that Canada needs to do, fixing copyright ranks well below health emergencies on First Nations reservations, exploiting the oil patch in Alberta, interceding in sectarian resentments among French- and English-speakers, solving resources crises in the nation's fisheries, and a thousand other issues. The triviality of copyright tells you that when other sectors of the economy start to evince concerns about the Internet and the PC, copyright will be revealed for a minor skirmish—not a war.

Why might other sectors come to nurse grudges against computers in the way the entertainment business already has? The world we live in today is made of computers. We don't have cars anymore; we have computers we ride in. We don't have airplanes anymore; we have flying Solaris boxes attached to bucketfuls of industrial control systems. A 3D printer is not a device, it's a peripheral, and it only works connected to a computer. A radio is no longer a crystal: it's a general-purpose computer, running software. The grievances that arise from unauthorized copies of Snooki's Confessions of a Guidette are trivial when compared to the calls-to-action that our computer-embroidered reality will soon create.

Consider radio. Radio regulation until today was based on the idea that the properties of a radio are fixed at the time of manufacture, and can't be easily altered. You can't just flip a switch on your baby monitor and interfere with other signals. But powerful software-defined radios (SDRs) can change from baby monitor to emergency services dispatcher or air traffic controller, just by loading and executing different software. This is why the Federal Communications Commission (FCC) considered what would happen when we put SDRs in the field, and asked for comment on whether it should mandate that all software-defined radios should be embedded in "trusted computing" machines. Ultimately, the question is whether every PC should be locked, so that their programs could be strictly regulated by central authorities.

Even this is a shadow of what is to come. After all, this was the year in which we saw the debut of open source shape files for converting AR-15 rifles to full-automatic. This was the year of crowd-funded open-sourced hardware for genetic sequencing. And while 3D printing will give rise to plenty of trivial complaints, there will be judges in the American South and mullahs in Iran who will lose their minds over people in their jurisdictions printing out sex toys. The trajectory of 3D printing will raise real grievances, from solid-state meth labs to ceramic knives.

It doesn't take a science fiction writer to understand why regulators might be nervous about the user-modifiable firmware on self-driving cars, or limiting interoperability for aviation controllers, or the kind of thing you could do with bio-scale assemblers and sequencers. Imagine what will happen the day that Monsanto determines that it's really important to make sure that computers can't execute programs which cause specialized peripherals to output custom organisms which literally eat their lunch.

Regardless of whether you think these are real problems or hysterical fears, they are, nevertheless, the political currency of lobbies and interest groups far more influential than Hollywood and big content. Every one of them will arrive at the same place: "Can't you just make us a general-purpose computer that runs all the programs, except the ones that scare and anger us? Can't you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?"

There will be programs that run on general-purpose computers, and peripherals, that will freak even me out. So I can believe that people who advocate for limiting general-purpose computers will find a receptive audience. But just as we saw with the copyright wars, banning certain instructions, protocols or messages will be wholly ineffective as a means of prevention and remedy. As we saw in the copyright wars, all attempts at controlling PCs will converge on rootkits, and all attempts at controlling the Internet will converge on surveillance and censorship. This stuff matters because we've spent the last decade sending our best players out to fight what we thought was the final boss at the end of the game, but it turns out it's just been an end-level guardian. The stakes are only going to get higher.

As a member of the Walkman generation, I have made peace with the fact that I will require a hearing aid long before I die. It won't be a hearing aid, though; it will really be a computer. So when I get into a car—a computer that I put my body into—with my hearing aid—a computer I put inside my body—I want to know that these technologies are not designed to keep secrets from me, or to prevent me from terminating processes on them that work against my interests.

Last year, the Lower Merion School District, in a middle-class, affluent suburb of Philadelphia, found itself in a great deal of trouble. It was caught distributing, to its students, rootkitted laptops that allowed remote covert surveillance through the computer's camera and network connection. The district photographed students thousands of times, at home and at school, awake and asleep, dressed and naked. Meanwhile, the latest generation of lawful intercept technology can covertly operate cameras, microphones, and GPS transceivers on PCs, tablets, and mobile devices.

We haven't lost yet, but we have to win the copyright war first if we want to keep the Internet and the PC free and open. Freedom in the future will require us to have the capacity to monitor our devices and set meaningful policies for them; to examine and terminate the software processes that run on them; and to maintain them as honest servants to our will, not as traitors and spies working for criminals, thugs, and control freaks.
