
digital video

Original author: Cory Doctorow

Journeyman Pictures' short documentary "Naked Citizens" is an absolutely terrifying and amazing must-see glimpse of the modern security state, and the ways in which it automatically ascribes guilt to people based on algorithmic inferences, and, having done so, conducts such far-reaching surveillance into its victims' lives that the lack of anything incriminating is treated as proof of being a criminal mastermind:

"I woke up to pounding on my door", says Andrej Holm, a sociologist from the Humboldt University. In what felt like a scene from a movie, he was taken from his Berlin home by armed men after a systematic monitoring of his academic research deemed him the probable leader of a militant group. After 30 days in solitary confinement, he was released without charges. Across Western Europe and the USA, surveillance of civilians has become a major business. With one camera for every 14 people in London and drones being used by police to track individuals, the threat of living in a Big Brother state is becoming a reality. At an annual conference of hackers, keynote speaker Jacob Appelbaum asserts, "to be free of suspicion is the most important right to be truly free". But with most people having a limited understanding of this world of cyber surveillance and how to protect ourselves, are our basic freedoms already being lost?

World - Naked Citizens (Thanks, Dan!)     


Lately I've been trying to rid my life of as many physical artifacts as possible. I'm with Merlin Mann on CDs:

Can't believe how quickly CDs went from something I hate storing to something I hate buying to something I hate merely existing.

Although I'd extend that line of thinking to DVDs as well. The death of physical media has some definite downsides, but after owning certain movies once on VHS, then on DVD, and finally on Blu-Ray, I think I'm now at peace with the idea of not owning any physical media ever again, if I can help it.

My current strategy of wishing my physical media collection into a cornfield involves shipping all our DVDs to Second Spin via media mail, and paying our nephew $1 per CD to rip our CD collection using Exact Audio Copy and LAME as a summer project. The point of this exercise is absolutely not piracy; I have no interest in keeping both digital and physical copies of the media I paid for the privilege of owning – er, temporarily licensing. Note that I didn't bother ripping any of the DVDs because I hardly ever watched them; mostly they just collected dust. But I continue to love music and listen to my music collection on a daily basis. I'll donate all the ripped CDs to some charity or library, and if I can't pull that off, I'll just destroy them outright. Stupid atoms!

CDs, unlike DVDs or even Blu-Rays, are considered reference quality. That is, the uncompressed digital audio data contained on a CD is a nearly perfect representation of the original studio master, for most reasonable people's interpretation of "perfect", at least back in 1980. So if you paid for a CD, you might be worried that ripping it to a compressed digital audio format would result in an inferior listening experience.

I'm not exactly an audiophile, but I like to think I have pretty good ears. I've recommended buying $200+ headphones and headphone amps for quite a while now. By the way: still a good investment! Go do it! Anyhow, previous research and my own experiments led me to write Getting the Best Bang for Your Byte seven years ago. I concluded that nobody could really hear the difference between a raw CD track and an MP3 using a decent encoder at a variable bit rate averaging around 160kbps. Any bit rate higher than that was just wasting space on your device and your bandwidth for no rational reason. So-called "high resolution audio" was recently thoroughly debunked for very similar reasons.

Articles last month revealed that musician Neil Young and Apple's Steve Jobs discussed offering digital music downloads of 'uncompromised studio quality'. Much of the press and user commentary was particularly enthusiastic about the prospect of uncompressed 24 bit 192kHz downloads. 24/192 featured prominently in my own conversations with Mr. Young's group several months ago.

Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.

There are a few real problems with the audio quality and 'experience' of digitally distributed music today. 24/192 solves none of them. While everyone fixates on 24/192 as a magic bullet, we're not going to see any actual improvement.

The authors of LAME must have agreed with me, because the typical, standard, recommended, default way of encoding any old audio input to MP3 …

lame --preset standard "cd-track-raw.wav" "cd-track-encoded.mp3"

… now produces variable bit rate MP3 tracks at a bitrate of around 192kbps on average.

[Screenshot: EncSpot showing the average bitrates of the encoded tracks]

(Going down one level to the "medium" preset produces nearly exactly 160kbps average, my 2005 recommendation on the nose.)
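For reference, the relevant preset ladder looks something like this – filenames are placeholders, and see LAME's own documentation for the authoritative list:

lame --preset medium "cd-track-raw.wav" "cd-track-encoded.mp3"
lame --preset standard "cd-track-raw.wav" "cd-track-encoded.mp3"
lame --preset insane "cd-track-raw.wav" "cd-track-encoded.mp3"

Medium targets roughly 160kbps VBR, standard roughly 192kbps VBR, and insane is a flat 320kbps CBR – the same three quality levels that show up in the size comparison below.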

Encoders have only gotten better since the good old days of 2005. Given the many orders of magnitude improvement in performance and storage since then, I'm totally comfortable with throwing an additional 32kbps in there, going from 160kbps average to 192kbps average just to be totally safe. That's still a minuscule file size compared to the enormous amount of data required for mythical, aurally perfect raw audio. For a particular 4 minute and 56 second music track, that'd be:

Uncompressed raw CD format: 51 MB
Lossless FLAC compression: 36 MB
LAME insane encoded MP3 (320kbps): 11.6 MB
LAME standard encoded MP3 (192kbps avg): 7.1 MB
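(As a rough sanity check on that first row: CD audio is 44,100 samples per second × 2 channels × 2 bytes per sample, or about 176 KB per second, so a 4:56 track – 296 seconds – works out to roughly 52 million bytes, right in line with the 51 MB figure above.)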

Ripping to uncompressed audio is a non-starter. I don't care how much of an ultra audio quality nerd you are, spending 7× (uncompressed) or 5× (lossless FLAC) the bandwidth and storage for completely inaudible "quality" improvements is a dagger directly in the heart of this efficiency-loving nerd, at least. Maybe if you're planning to do a lot of remixing and manipulation it might make sense to retain the raw source audio, but for typical listening, never.

The difference between the 320kbps track and the 192kbps track is more rational to argue about. But it's still 1.6 times the size. Yes, we have tons more bandwidth and storage and power today, but storage space on your mobile device will never be free, nor will bandwidth or storage in the cloud, where I think most of this stuff should ultimately reside. And all other things being equal, wouldn't you rather be able to fit 200 songs on your device instead of 100? Wouldn't you rather be able to download 10 tracks in the same time instead of 5? Efficiency, that's where it's at. Particularly when people with dog's ears wouldn't even be able to hear the difference.

But Wait, I Have Dog Ears

Of course you do. On the Internet, nobody knows you're a dog. Personally, I think you're a human being full of crap, but let's drop some science on this and see if you can prove it.

[Cartoon: "On the Internet, nobody knows you're a dog"]

When someone tells me "Dudes, come on, let's steer clear of the worst song ever written!", I say challenge accepted. Behold The Great MP3 Bitrate Experiment!

As proposed on our very own Audio and Video Production Stack Exchange, we're going to do a blind test of the same 2 minute excerpt of a particular rock audio track at a few different bitrates, ranging from 128kbps CBR MP3 all the way up to raw uncompressed CD audio. Each sample was encoded (if necessary), then exported to WAV so they all have the same file size. Can you tell the difference between any of these audio samples using just your ears?
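In case you're wondering how samples can be level-matched on file size, here's a minimal sketch of one plausible pipeline (placeholder filenames, and not necessarily the exact steps used here): encode at the target bitrate, then decode straight back to WAV.

lame -b 128 "excerpt.wav" "excerpt-128.mp3"
lame --decode "excerpt-128.mp3" "sample.wav"

The decoded file is back in the same uncompressed WAV format as the raw original, so file size alone gives nothing away about which sample got which bitrate.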

1. Listen to each two minute audio sample

Limburger
Cheddar
Gouda
Brie
Feta

2. Rate each sample for encoding quality

Once you've given each audio sample a listen – with only your ears please, not analysis software – fill out this brief form and rate each audio sample from 1 to 5 on encoding quality, where one represents worst and five represents flawless.

Yes, it would be better to use a variety of different audio samples, like SoundExpert does, but I don't have time to do that. Anyway, if the difference in encoding bitrate quality is as profound as certain vocal elements of the community would have you believe it is, that difference should be audible in any music track. To those who might argue that I am trolling audiophiles into listening to one of the worst-slash-best rock songs of all time … over and over and over … to prove a point … I say, how dare you impugn my honor in this manner, sir. How dare you!

I wasn't comfortable making my generous TypePad hosts suffer through the bandwidth demands of multiple 16 megabyte audio samples, so this was a fun opportunity to exercise my long dormant Amazon S3 account, and test out Amazon's on-demand CloudFront CDN. I hope I'm not rubbing any copyright holders the wrong way with this test; I just used a song excerpt for science, man! I'll pull the files entirely after a few weeks just to be sure.

You'll get no argument from me that the old standby of 128kbps constant bit rate encoding is not adequate for most music, even today, and you should be able to hear that in this test. But I also maintain that virtually nobody can reliably tell the difference between a 160kbps variable bit rate MP3 and the raw CD audio, much less 192kbps. If you'd like to prove me wrong, this is your big chance. Like the announcer in Smash TV, I say good luck – you're gonna need it.

So which is it – are you a dog or a man? Give the samples a listen, then rate them. I'll post the results of this experiment in a few days.



[Photo: Linus Torvalds]

The Millennium Technology Prize, awarded every two years, is a Finnish award designed “to improve the quality of life and to promote sustainable development-oriented research, development and innovation.” Sir Tim Berners-Lee won the prize in 2004. The finalists this year are Dr. Shinya Yamanaka, who has been contributing to the area of stem cell research, and Linux creator Linus Torvalds. The 2012 Grand Prize winner will be announced on June 13 in Helsinki, Finland.

From the press release:
In recognition of his creation of a new open source operating system kernel for computers leading to the widely used Linux operating system. The free availability of Linux on the Web swiftly caused a chain-reaction leading to further development and fine-tuning worth the equivalent of 73,000 man-years. Today millions of computers, smartphones and digital video recorders like TiVo run on Linux. Linus Torvalds’ achievements have had a great impact on shared software development, networking and the openness of the web, making it accessible for millions, if not billions.

I had the opportunity to ask Linus a few questions by email. Hopefully I didn’t simply create a nerd version of The Chris Farley Show.

Scott Merrill: You use a MacBook Air because you want a silent, quality computer. Why is it that Apple has the corner on this market? Have you considered using your fame or some portion of your fortune to try to remedy this?

Linus Torvalds: You *really* don’t want me to start designing hardware. Hey, I’m a good software engineer, but I’m not exactly known for my fashion sense. White socks and sandals don’t translate to “good design sense”.

That said, I have to admit being a bit baffled by how nobody else seems to have done what Apple did with the Macbook Air – even several years after the first release, the other notebook vendors continue to push those ugly and *clunky* things. Yes, there are vendors that have tried to emulate it, but usually pretty badly. I don’t think I’m unusual in preferring my laptop to be thin and light.

Btw, even when it comes to Apple, it’s really just the Air that I think is special. The other apple laptops may be good-looking, but they are still the same old clunky hardware, just in a pretty dress.

I’m personally just hoping that I’m ahead of the curve in my strict requirement for “small and silent”. It’s not just laptops, btw – Intel sometimes gives me pre-release hardware, and the people inside Intel I work with have learnt that being whisper-quiet is one of my primary requirements for desktops too. I am sometimes surprised at what leaf-blowers some people seem to put up with under their desks.

I want my office to be quiet. The loudest thing in the room – by far – should be the occasional purring of the cat. And when I travel, I want to travel light. A notebook that weighs more than a kilo is simply not a good thing (yeah, I’m using the smaller 11″ macbook air, and I think weight could still be improved on, but at least it’s very close to the magical 1kg limit).

SM: I wasn’t so much asking why you haven’t designed your own hardware — I fully understand people playing to their own strengths. It’s taken considerable time for hardware manufacturers to recognize Linux as a viable platform, and today more and more OEMs are actively including or working toward Linux compatibility. Surely there’s an opportunity there for the global Linux community to influence laptop design for the betterment of everyone? I know it’s not your passion, and I respect that. Do you have any suggestions or guidance on ways we can collectively influence these kinds of things?

LT: I think one of the things that made Apple able to do this was how focused they’ve been able to stay. They really have rather few SKU’s compared to most big computer manufacturers, and I think that is what has allowed them to focus on those particular SKU’s and make them be better than the average machine out there.

Sure, they have *some* variation (different amounts of memory etc), but compare the Apple offerings to the wild and crazy world of HP or Lenovo or Toshiba. Other hardware manufacturers tend to not put all their eggs in a single (or a few) baskets, and even then they tend to hedge their bets and go for fairly safe and boring on most offerings (and then they sometimes make the mistake of going way crazy for the “designer” models to overcompensate for their boring bread-and-butter).

That kind of focus is quite impressive. It’s also often potentially unstable – I think most people still remember Apple’s rocky path. I used to think that Apple would go bankrupt not *that* long ago, and I’m sure I wasn’t the only one. And it can be hard to maintain in the long run, which is probably why most other companies don’t act that way – the companies who consistently try to revolutionize the world also consistently eventually fail.

So that kind of focus takes guts. I’m not an apple fan, because I think they’ve done some really bad things too, but I have to give them credit for not just having good designers, but the guts to go with it. Jobs clearly had a lot to do with that.

Anyway, I don’t think it’s worth worrying too much about laptops. The thing is, the Macbook Air was (and still to some degree is) ahead of its time. But I actually think that hardware is catching up to the point where doing good laptops really isn’t going to be rocket science any more. Rotational media really is going away, and with it goes one of the last formfactor issues: people really do not need (or want) that big spindle for a harddisk, or the silly spindle for an optical drive.

Sure, optical drives will remain in some form factors for a while, and other form factors will remain bigger just because the manufacturer will want to continue to offer the capability of a rotational disk too – they’re still cheaper and have bigger capacities. But at the same time, *small* flash-based storage is really getting quite good, and while you still pay more for them, it’s not revolutionary any more. The mSATA/miniPCIe form factor is becoming a more and more realistic standard form factor.

Together with CPU’s often being “fast enough” I would expect that the macbook air kind of formfactor becomes way more of a norm than it used to be. Apple was ahead of the curve, and I absolutely have higher expectations of the hardware I use than the average user probably does, but at the same time I’m convinced that the notebook market will finally get where I think it should be. Sure, some people will still want to use the big clunkers, but making a good thin-and-light machine is simply not going to be the technically expensive challenge it used to be.

In other words, we’ll take the whole Macbook Air formfactor for granted in a few years. It’s been done, it used to be pretty revolutionary, it’s going to be pretty standard.

It *did* take a lot longer than I thought it would take, admittedly. I’ve loved the thin-and-lights for much longer than the Macbook Air has existed. It’s not like Apple made up the concept – they just executed well on it.

What I in many ways think is more interesting are people who do new things. I love the whole Raspberry Pi concept, for example. That’s revolutionary in a whole different direction – maybe not the prettiest form-factor, but taking advantage of how technology gets cheaper to really push the price down to the point where it’s really cheap. Sure, it’s a bit limited, but it’s pretty incredible what you can do for $35. Think about that with a few more years under its belt.

The reason I think that is interesting is because I think we’re getting to the point where it is *so* cheap to put a traditional computer together, that you can really start using that as a platform for doing whole new things. Sure, it’s good for teaching people, but the *real* magic is if one of those people who get one of those things comes up with something really new and fun to do with it.

Fairly cheap home computing was what changed my life. I wouldn’t worry about how to incrementally improve laptop design: I think it’s interesting to see what might *totally* change when you have dirt cheap almost throw-away computing that you can use to put a real computer inside some random toy or embedded device. What does that do to the embedded development world when things like that are really widely available?

SM: You don’t pull any punches when communicating with kernel developers and patch submitters. Has this tactic helped or hindered your success as a father?

LT: I really don’t know. I think the kids have grown up really well, and I don’t think it hurt them that we had rules in the family that were fairly strictly enforced (usually with a five-minute timeout in the bathroom). We had a very strict “no whining” rule, for example, and I’ve seen kids that should definitely have been brought up with a couple of rules like that.

That said, maybe they’re just naturally good kids. I don’t remember the last time I sent them to the bathroom (but it’s still a joke in our family: “If you don’t behave, you’ll spend the rest of the day in the bathroom”).

And while I do work from home, I am *not* a “father” when I work. The kids always knew that if they came in and disturbed me while I was at the computer, they’d get shouted at. I know some people who say that they could never work from home because they’d be constantly distracted by their kids – that is just not the case in our family. So despite me working from home, we’re a very “traditional” family – Tove stayed at home and was really the homemaker and took care of the kids.

And don’t get me wrong: when I interact with kernel developers, there can be a lot of swearing involved. And while that may *occasionally* happen with the kids too, the kids get hugs and good-night kisses too. Kernel developers? Not so much.

Would some kernel people prefer getting tucked in at night instead of being cursed at? I’m sure that would be appreciated. I don’t think I have it in me, though.

SM: How does your family feel about what you do for a living? What questions did/do they ask?

LT: They’ve never seen anything else, so I doubt they even think about it. It’s just what dad does. None of my three daughters have so far shown any actual interest in computers (outside of being pure users – they game, they chat, they do the facebook thing), and while they end up using Linux for all of that they don’t seem to think it’s all that strange.

SM: Do you try to get involved with technology problem solving in your every day life, for example at your kids’ school? If so, how has that been received?

LT: Oh, the absolute *last* thing I want to do is be seen as a support person. No way.

Sure, I do maintain the computers in the house, and it obviously means that the kids’ laptops (that they use in school too) run Linux, but it turns out that the local school district has had some Linux use in their computer labs anyway, so that never even made them look all that different.

But I’m simply not really organized enough to be a good MIS person. And frankly, I lack the interest. I find the low-level details of how computers work really interesting, but if I had to care about user problems and people forgetting their passwords or messing up their backups, I don’t know what I’d do. I’d probably turn to drugs and alcohol to dull the pain.

Even in the kernel project, I’m really happy that I’m not a traditional manager. I don’t have to manage logistics and people, I can worry purely about the technical side. So while I don’t do all that much programming any more (I spend most of my day merging code others wrote), I also don’t think of myself as a “manager”, I tend to call myself a “technical lead person” instead.

SM: What do you want to tell people that no one has ever bothered to ask you?

LT: The thing is, I don’t have a “message” to people. I never really did. I did (and do) Linux because it’s fun and interesting, and I really also enjoy the social aspect of developing things in the open, but I really don’t have anything I want to tell people.

SM: I apologize for not making this question more clear. I’m not asking if you have a message or anthem or anything like that. As a celebrity, you’ve conducted lots of interviews. Many of them have been formulaic, and there’s only so many times you can receive the same questions before rolling your eyes in exasperation.

Is there any question you wish you’d've been asked in an interview? Whether it’s because you’ve got the perfect / clever / whatever answer prepared, or just because you’d welcome the novelty of it? If so, what would have been your answer?

LT: Hmm. Some of the interviews I’ve enjoyed the most have been from somewhat antagonistic people who came from a non-computer background. I remember this Russian journalist (back when I lived in Helsinki), who was writing a piece for some Russian financial newspaper. He really was pretty aggressive, and being a Russian from after the fall of the Soviet Union he had an almost unhealthy admiration for Microsoft and making lots of money, and capitalism. I’m sure it was heightened by the whole admiration for Wall Street etc that must run in the blood of most financial journalists to begin with.

That made for an interesting interview – because I like arguing. Explaining to a person like that why open source works, and in fact works better than the model he so clearly idolized was interesting. I don’t think I necessarily convinced him, but it still made for a memorable interview.

But any particular question? No. That’s not what I tend to find interesting – I enjoy the process, and the argument, and the flow of ideas of an interview, I don’t think there’s a “perfect question”, much less a “perfect answer that I wish somebody had asked me the question for”. So you’re not asking for something that I think I have.

But to expand on that, and to perhaps give you something of an answer anyway: this is very much true for me in software development too. I like the *process*. I like writing software. I like trying to make things work better. In many ways, the end result is unimportant – it’s really just the excuse for the whole experience. It’s why I started Linux to begin with – sure, I kind of needed an OS, but I needed a *project* to work on more than I needed the OS.

In fact, to get a bit “meta” on this issue, what’s even more interesting than improving a piece of software, is to improve the *way* we write and improve software. Changing the process of making software has sometimes been some of the most painful parts of software development (because we so easily get used to certain models), but that has also often been the most rewarding parts. It is, after all, why “git” came to be, for example. And I think open source in general is obviously just another “process model” change that I think is very successful.

So my model is kind of a reverse “end result justifies the means”. Hell no, that’s the stupidest saying in the history of man, and I’m not even saying that because it has been used to make excuses for bad behavior. No, it’s the worst possible kind of saying because it totally misses the point of everything.

It’s simply not the end that matters at all. It’s the means – the journey. The end result is almost meaningless. If you do things the right way, the end result *will* be fine too, but the real enjoyment is in the doing, not in the result.

And I’m still really happy to be “doing” 20 years later, with not an end in sight.

SM: Looking back over the history of Linux, do you have any “Oh man, I can’t believe I did/said that” reactions? (Note: this is not in respect to code strictly, but engineering or policy decisions)

LT: Engineering decisions usually aren’t a problem. Sure, I’ve made the wrong decision many times, but usually there was some good reason for it at the time – and the important part about engineering decisions is that you can fix them later when you realize they were wrong. So the “oh, that was spectacularly wrong” happens all the time, but the more spectacular it is, the quicker we notice, and that means that we fix it quickly too.

The one really memorable “Oh sh*t” moment was literally very early on in Linux development, when I realized that I had auto-dialed my main harddisk when I *meant* to auto-dial the university dial-in lines over the modem. And in the process wiped out my then Minix setup by writing AT-commands to the disk that understandably didn’t respond the way the autodialling script expected (“AT commands” is just the traditional Hayes modem control instruction set).

That’s the point where I ended up switching over to Linux entirely, so it was actually a big deal for Linux development. But that was back in 1991.

SM: If you could give an award to someone, who would be the recipient, and for what accomplishment?

LT: Hey, while I am a computer guy, my heroes are still “real scientists”. So if I can pick anybody, I think I’d pick Richard Dawkins for just being such an outspoken critic of muddled thinking and anti-scientific thought.

SM: The Millennium Technology Prize ceremony is on June 13, which happens to be my birthday. Any chance I can be your +1 to the party?

LT: Scott, I never knew you felt that way. I think my wife would not approve.

SM: Nor would mine, but you miss all the shots you don’t take!

SM: What are the major Linux distributions doing right, in general, and where are they falling short? Your recent Google+ rant about OpenSUSE’s security stance sheds some light on this, but I’d like to know more. Are formalized distributions a necessary evil? How much (if any) influence do you have with the distributions?

LT: So I absolutely *love* the distributions, because they are doing all the things that I’m not interested in, and even very early on they started being a big support for the kernel, and driving all the things that most technical people (including very much me) didn’t tend to be interested in: ease of use, internationalization, nice packaging, just making things a good “experience”.

So I think distributions have been very instrumental in making Linux successful, and that whole thing started happening very early on (some of the first distributions started happening in early ’92 – on floppy disks).

So they aren’t even a “necessary evil” – they are a “necessary good”. They’ve been very instrumental in making Linux be what it is, both on the technical side, but *especially* on the ease-of-use and approachability side.

That said, exactly because they are so important, it does frustrate me when I hit things that I perceive to be steps backwards. The SuSE rant was about asking a non-technical user for a password that the non-technical user had absolutely no reason to even know, in a situation where it made no sense. That kind of senseless user hostility is something that we’ve generally moved away from (and some kernel people tend to dismiss Ubuntu, but I really think that Ubuntu has generally had the right approach, and been very user-centric).

The same thing is what frustrated me about many of the changes in Gnome 3. The whole “let’s make it clutter-free” was taken to the point where it was actually hard to get things done, and it wasn’t even obvious *how* to do things when you could do them. That kind of minimalist approach is not forward progress, it’s just UI people telling people “we know better”, even if it makes things harder to do. That kind of “things that used to be easy are suddenly hard or impossible” just drives me up the wall, and frustrates me.

As to my own influence: it really goes the other way. The distributions have huge influences on the kernel, and not only in the form of employing a lot of the engineers. I actively look to the distributions to see which parts of the kernel get used, and often when people suggest new features, one of the things that really clinches it for me is if a manager for some distribution speaks up and says “we’re already using that, because we needed it for xyz”.

Sure, I end up influencing them through what I merge, and how it’s done, but at the same time I really do see the distributions as one of the first users of the kernel, and the whole way we do releases (based on time, not features) is partly because that way distributions can plan ahead sanely. They know the release schedule to within a week or two, and we try very hard to be reliable and not do crazy things.

We have a very strict “no regressions” rule, for example, and a large part of that rule is so that people – very much including the people involved in distributions – don’t need to fear upgrades. If it used to work a certain way, we try very hard to make sure it continues to work that way. Sure, bugs happen, and some change may not be noticed in time, but on the whole I think a big part of kernel development is to try to make it as painless as possible for people to upgrade smoothly.

Because if you make upgrades painful, it just means that people will stay back.

SM: You’ve been doing this for 20 years. What do you think of the newest crop of kernel contributors? Do you see any rising stars? Do you see any positive or worrisome trends with respect to the kind and caliber of contribution from younger developers?

LT: I’m very happy that we still have a very wide developer base, and we continue to see more than a thousand different people for each release (which is roughly every three months or so). A lot of those contributions come from people who make just tiny one-liner changes, and some of them are never heard from again once they got their one small fix done, but on the other hand, small one-liner changes are how many others get started.

That said, one of the things that *has* changed a lot in the 20 years is that we certainly have a lot more “process” in place. Most of those one-liners didn’t get to me directly – many of them came through multiple layers of submaintainers etc. By the time I see most “rising stars” they’ve already been doing smaller changes for a long time.

The one worrisome trend is pretty much inevitable: the kernel *is* getting big, and a lot of the core code is quite complex and sometimes hard to really wrap your head around. Core areas like the VM subsystem or the core VFS layer simply are not easy to get into for a new developer. That makes it a bit harder to get started if that’s what you are interested in – the bar has simply been raised from where it was ten or fifteen years ago.

At the same time, I do think it’s still fairly easy to get involved, you may just have to start in a less central place. Most kernel people start off worrying about one particular driver or platform, and “grow” from there. We do seem to have quite a lot of developers, and I’ve talked to open source project maintainers that are very envious of just how many people we have involved in the kernel.

SM: You’ve said that it’s the technical challenge that keeps you involved and motivated. Surely there are plenty of technical challenges in the world. Why stick with the kernel?

LT: I think it’s partly because I’m the kind of person who doesn’t flit from one project to another. I keep on doing Linux, because once I get started, I’m kind of obstinate that way.

But part of it is simply the reason I started doing a kernel in the first place – if what you are interested in is low-level interactions with hardware, a kernel is where it is all at. Sure, there are tons of technical challenges out there, but very few of them are as interesting as an operating system kernel if you are into that kind of low-level interaction between software and hardware.

SM: As the number of systems and architectures supported by the Linux kernel continues to grow, you can’t possibly have development hardware for each of them. How do you verify the quality and functionality of all the change requests you get?

LT: Oh, that’s easy: I don’t.

The whole model is built on a network of trust among developers that have come to know each other over the years. There’s no way I can test all the platforms we support – the same way there is no way I can check every single commit that gets merged through me. And I wouldn’t really even *want* to check each hardware or each change – the point of open source and distributed development is that you do things together. We have a few tens of “high-level” maintainers for various subsystems (eg networking, USB drivers, graphics, particular hardware architectures etc etc), and even those maintainers can’t test everything in their area, because they won’t have that particular hardware etc. I trust them, and they in turn trust the people they work with.

I think any big project is about finding people you can trust, and really then depending on that trust. I don’t *want* to micro-manage people, and I couldn’t afford to even if I did want to.

And the thing is, smart people (and people who have what I call “good taste”, which is often even more important) may be rare, but you do recognize them. I think one of my biggest successes is actually outside Linux: recognizing how good a developer Junio Hamano was on git, and trusting him enough to just ask if he would be willing to maintain the project. Being able to let go and trusting somebody else is *important*, because without that kind of trust you can’t get big projects done.

What will Linus do with the prize money, if he wins? “I guess I won’t have to worry about the kids’ education any more,” he says.

Thanks, Linus, for taking the time to chat with me. And good luck! We hope you win the Millennium Technology Prize!

Photo credit: Wikipedia



TEDxPhoenix - Nathan Barnatt - One Million Views for a Change

About Nathan Barnatt: Nathan Barnatt is a character actor and physical comedian bordering on stuntman. Two of his influences are Buster Keaton and Andy Kaufman. He is a performer at the Upright Citizens Brigade theater in Los Angeles. Nathan has made a name for himself with his original brand of comedy, and he's quite the dancer. In his TEDxPhoenix 11.11.11 TEDxTalk, Nathan shares the inspiration behind his Que Veux-tu YouTube video and gives the audience a sneak peek of his newest video, Comme Un Enfant. Website: www.nathanbarnatt.com

About TEDxPhoenix 11.11.11: The theme for TEDxPhoenix 11.11.11 was "______ for a Change" and featured speakers from around the US who are exploring unique ideas that have brought about unexpected, interesting, and positive changes. Five hundred people from all over Arizona and the western US gathered at the Mesa Arts Center for an evening of thought-provoking and entertaining talks. The opening title and speaker video intros were created under the design and direction of TEDxPhoenix Art Director Safwat Saleem (@safwat). Motion graphics and animation provided by TEDxPhoenix volunteer Qa'ed Tung (@qaedtung). Video editing of this TEDxPhoenix TEDxTalk was done by UAT Digital Video student Neil Sparks, under the guidance of UATDV Program Champion, Professor Paul DeNigris (@UATDV). Filming of TEDxPhoenix 2011 was conducted by UAT Digital Video students Dylan White, Kennedy Gray, Ty …
From: TEDxTalks


[Image: The Matrix, compressed into a movie barcode]

Choice of color in a movie can say a lot about what's going on in a scene. It sets the mood, changes the tone, indicates a change in point of view, so on and so forth, which is why moviebarcode is so fun to click through. The concept is simple. Take every frame in a movie and compress it into a sliver, and put them next to each other. Voilà. Movie barcode.

The above is The Matrix, making it obvious when they're in and out of the system. Below are Kill Bill and The Social Network, respectively.

See dozens more on the moviebarcode tumblr, which is also selling these as prints.
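If you want to roll your own, here's a minimal sketch assuming ffmpeg and ImageMagick are installed (movie.mp4 is a placeholder, and sampling one frame per second is a shortcut – moviebarcode uses every frame):

mkdir slivers
ffmpeg -i movie.mp4 -vf "fps=1,scale=1:240" slivers/%05d.png
montage slivers/*.png -tile x1 -geometry +0+0 barcode.png

The scale filter averages each sampled frame down to a single-pixel-wide, 240-pixel-tall column, and montage glues the columns side by side into one long strip.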

[moviebarcode via @mslima]
