
BitTorrent


It was a late night in May. Renderman, the computer hacker notorious for discovering that outdated air traffic control software could be used to reroute planes mid-flight, was feeling shitty. The stress of digging himself out of debt he’d accumulated during years of underemployment was compounded by the feeling of being trapped in a job he hated. He was forgetful and couldn’t focus on anything. “Depression has sapped my motivation and lust for life,” he later wrote. “I can't remember the last time I worked on a project ... it's like I'm a ghost in my own life. Just existing but with no form ... I’m most definitely not myself.”

Feeling slightly buzzed after a few beers, he decided to speak out. “My name is Renderman and I suffer from depression,” he tweeted.

Within minutes, other hackers started responding.



BitTorrent today unveiled BitTorrent Live, a live streaming service designed to "eliminate barriers" for broadcasting by cutting down on infrastructure and other backend costs. The new protocol (currently in beta) accomplishes this by relying on the same peer-to-peer technology BitTorrent uses to transfer large files across the internet. Company founder Bram Cohen has spent three years working on the project, according to a recent interview with TechCrunch. Rather than introducing a middleman, BitTorrent Live forges a direct connection between broadcasters and their viewers, transforming each person tuning in into "a miniature broadcaster." Cohen says the broadcast delay is typically just five seconds. This "more efficient" method of...
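The excerpt cuts off here, but the core mechanism is simple to sketch: instead of every viewer pulling the stream from a central server, each viewer relays the chunks it receives to a few downstream viewers. Below is a minimal, hypothetical Python illustration of that relay idea (the names and structure are invented for clarity; this is not BitTorrent Live's actual protocol):

```python
# Minimal sketch of peer-assisted live streaming (hypothetical; not
# BitTorrent Live's real wire protocol). Each viewer re-forwards the
# chunks it receives, so the broadcaster only feeds a few peers directly.

class Peer:
    def __init__(self, name):
        self.name = name
        self.downstream = []   # peers this viewer relays to
        self.next_seq = 0      # next chunk expected, to keep playback in order

    def connect(self, peer):
        self.downstream.append(peer)

    def receive(self, seq, data):
        if seq != self.next_seq:
            return  # a real client would buffer or re-request out-of-order chunks
        self.next_seq += 1
        self.play(data)
        for peer in self.downstream:  # act as "a miniature broadcaster"
            peer.receive(seq, data)

    def play(self, data):
        print(f"{self.name} plays {data}")

# The broadcaster feeds only two peers; everyone else gets the stream relayed.
a, b, c, d = Peer("a"), Peer("b"), Peer("c"), Peer("d")
a.connect(c)
b.connect(d)
for seq, chunk in enumerate(["frame0", "frame1", "frame2"]):
    for direct_peer in (a, b):
        direct_peer.receive(seq, chunk)
```

The few seconds of broadcast delay Cohen mentions is the price of this fan-out: each chunk has to hop through a few layers of peers, plus buffering, before everyone has it.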




Peter Biddle speaks at the ETech conference in 2007. (Photo: Scott Beale)

Can digital rights management technology stop the unauthorized spread of copyrighted content? Ten years ago this month, four engineers argued that it can't, forever changing how the world thinks about piracy. Their paper, "The Darknet and the Future of Content Distribution" (available as a .doc here) was presented at a security conference in Washington, DC, on November 18, 2002.

By itself, the paper's clever and provocative argument likely would have earned it a broad readership. But the really remarkable thing about the paper is who wrote it: four engineers at Microsoft whose work many expected to be at the foundation of Microsoft's future DRM schemes. The paper's lead author told Ars that the paper's pessimistic view of Hollywood's beloved copy protection schemes almost got him fired. But ten years later, its predictions have proved impressively accurate.

The paper predicted that as information technology grew more powerful, it would become easier and easier for people to share information with each other. Over time, people would assemble themselves into what the authors called the "darknet." The term encompasses formal peer-to-peer networks such as Napster and BitTorrent, but it also includes other modes of sharing, such as swapping files over a local area network or exchanging USB thumb drives loaded with files.



An anonymous reader writes "Just Cause 2 Multiplayer has been getting a lot of press lately, but this making-of feature points out how the mod raises serious questions about the games industry: if 1,800-player massively multiplayer action games are possible on one server, why did it take a group of modders to prove it? From the article: 'There’s more chaos to come. That 1,800 player limit isn’t maxing out the server or the software by any means. Foote says that the team, who first met online seven years ago playing the similar Multi Theft Auto GTA mod, are "yet to reach any real barrier or limitation preventing us from reaching an even higher player count than the previous public tests." When it’s ready, the team will release the software for everyone to download and run their own servers, wherever they are in the world.'"
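The post doesn't explain how the JC2-MP server copes with 1,800 simultaneous players, but the standard technique for scaling player counts on a single server is interest management: each client only receives updates about entities near it, so per-client traffic stays roughly flat as the population grows. Here's a hypothetical grid-based sketch in Python (invented for illustration; the mod's actual internals aren't described in the article):

```python
# Hypothetical sketch of grid-based interest management, the usual way a
# single server scales to large player counts: each client is only sent
# updates for players in nearby grid cells, so per-client traffic does not
# grow with the total population.

from collections import defaultdict

CELL = 256.0  # cell size in world units (an assumed value)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def build_grid(players):
    """Map each grid cell to the players inside it."""
    grid = defaultdict(list)
    for pid, (x, y) in players.items():
        grid[cell_of(x, y)].append(pid)
    return grid

def relevant_players(pid, players, grid):
    """Players in the 3x3 block of cells around pid -- the only ones whose
    position updates this client needs to receive."""
    cx, cy = cell_of(*players[pid])
    nearby = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nearby.extend(grid[(cx + dx, cy + dy)])
    return [p for p in nearby if p != pid]

players = {1: (10.0, 20.0), 2: (50.0, 80.0), 3: (5000.0, 5000.0)}
grid = build_grid(players)
print(relevant_players(1, players, grid))  # -> [2]; player 3 is too far away
```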



Read more of this story at Slashdot.


About a year and a half ago, I researched the state of routers: about as unsexy as it gets but essential to the stability, reliability, and security of your Internet connection. My conclusion?

This is boring old plain vanilla commodity router hardware, but when combined with an open-source firmware, it is a massive improvement over my three-year-old, proprietary, high(ish) end router. The magic router formula these days is a combination of commodity hardware and open-source firmware. I'm so enamored of this one-two punch combo, in fact, I might even say it represents the future. Not just of the everyday workhorse routers we all need to access the Internet – but the future of all commodity hardware.

I felt a little bad about that post, because I quickly migrated from the DD-WRT open source firmware to OpenWRT and then finally settled on Tomato. I guess that's open source for you: too many choices, and nobody to really tell you what's going to work reliably on your particular hardware. But the good news is that I've been running Tomato quite happily with total stability for about a year now – primarily because it is gloriously simple, but also because it has the most functional quality of service (QoS) implementation of the three.

Tomato QoS configuration

Why does functional Quality of Service matter so very much in a router? Unless you have an Internet connection that's only used by your grandmother to visit her church's website on Sundays, QoS is the difference between a responsive Internet and one that's brutally dog slow.

Ever sat in an internet shop, a hotel room or lobby, or a local hotspot and wondered why you can't access your email? Unknown to you, the guy in the next room or at the next table is hogging the internet bandwidth to download the Lord Of The Rings Special Extended Edition in 1080p HDTV format. You're screwed - because the hotspot router does not have an effective QoS system. In fact, I haven't come across a shop or an apartment block locally that has any QoS system in use at all. Most residents are not particularly happy with the service they [usually] pay for.
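The mechanics behind this are straightforward: a QoS engine classifies traffic into classes (interactive, web, bulk) and drains the higher-priority queues first, so a neighbor's bulk download waits behind your email check instead of the other way around. Here's a toy Python sketch of strict priority queueing (the class names are assumptions; Tomato's real implementation rides on the Linux traffic-control machinery):

```python
# Toy sketch of strict priority queueing, the core idea behind a router's
# QoS engine (not Tomato's actual implementation). Interactive packets jump
# the queue, so one user's bulk download can't starve everyone else's email.

from collections import deque

PRIORITIES = ["interactive", "web", "bulk"]  # highest first (assumed classes)

class QosScheduler:
    def __init__(self):
        self.queues = {cls: deque() for cls in PRIORITIES}

    def enqueue(self, cls, packet):
        self.queues[cls].append(packet)

    def dequeue(self):
        """Send the next packet, always draining higher classes first."""
        for cls in PRIORITIES:
            if self.queues[cls]:
                return cls, self.queues[cls].popleft()
        return None

sched = QosScheduler()
for i in range(3):
    sched.enqueue("bulk", f"torrent-chunk-{i}")   # the LOTR download next door
sched.enqueue("interactive", "imap-fetch")        # your email check
print(sched.dequeue())  # -> ('interactive', 'imap-fetch'): email goes first
```

Real QoS implementations layer per-class rate guarantees on top of this, so bulk traffic is merely deprioritized rather than starved outright.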

When I switched from DD-WRT and OpenWRT to Tomato, I had to buy a different router, because Tomato only supports certain router hardware, primarily Broadcom. The almost universal recommendation was the Asus RT-N16, so that's what I went with.


Asus RT-N16

And it is still an excellent choice. If you just want a modern, workhorse single-band wireless-N router that won't break the bank, but has plenty of power and memory to run Tomato, definitely try the Asus RT-N16. It's currently available for under $80 (after $10 rebate). Once you get Tomato on there, you've got a fine combination of hardware and software. Take it from this SmallNetBuilder user review:

I'm a semigeek. Some of the stuff on this site confuses me. But I figured out enough to get this router and install Tomato USB. Great combination. Have not had any problems with the router. Love all the features that Tomato gives me. Like blocking my son's iPod after 7 PM. Blocking certain websites. Yeah, I know you can do that with other routers but Tomato made it easy. Also love the QoS features. Netflix devices get highest bandwidth while my wife's bittorrent gets low.

Review was too heavily slanted against the Asus software, which I agree is crap. I bought the router for its hardware specs. Large memory. Fast processor. Gigabit LAN. 2 USB ports.

What's not to love? Well, the dual-band thing, mainly. If you want a truly top-of-the-line router with incredible range and simultaneous dual-band 2.4 GHz and 5 GHz performance bragging rights, fortunately there's the Asus RT-N66U.

Asus RT-N66U

This is, currently at least, the state of the art in routers. It has a faster CPU and twice the memory (256 MB) of the RT-N16. But at $190 it is also over twice the price. Judge for yourself in the SmallNetBuilder review:

As good as the RT-N66U is, our wireless performance results once again show that no router is good in every mode that we test. But that said, the Dark Knight clearly outperformed both the NETGEAR WNDR4500 and Cisco Linksys E4200V2 in most of our two and three-stream tests. And it's the only router in recent memory able to reach our worst-case/lowest-signal test location on the 5 GHz band, albeit with barely-usable throughput. Still, this is an accomplishment in itself.

If you're going to spend close to $200 for a wireless router, you should get a lot for your money. The Dark Knight seems to deliver wireless performance to justify its high price and has routing speed fast enough to handle any service a consumer is likely to have, even our friends in Europe and Asia.

Its only weakness? Take a guess. Oh wait, no need to guess, it's the same "weakness" the RT-N16 shares: the sketchy Asus firmware it ships with out of the box. That's why we get our Tomato on, people! There is complete and mature support for the RT-N66U in Tomato; for a walkthrough on how to get it installed (don't be shy, it's not hard), check out Shadow Andy's TomatoUSB firmware flashing guide.

Does having nice router hardware with a current open source firmware matter? Well, if your livelihood depends on the Internet like mine does, then I certainly think so.


At the very least, if you or someone you love is also an Internet fan and hasn't given any particular thought to what router they use, maybe it's time to start checking into that. Now if you'll excuse me, I'm going to go donate to the Tomato project.




Editor’s Note: TechCrunch columnist Semil Shah currently works at Votizen and is based in Palo Alto. You can follow him on Twitter @semil.

“In the Studio” opens its doors this week to one of Silicon Valley’s most quietly active venture capitalists who, after years working in technology operations for major networking companies, a stint with an Asian telecom giant, and nearly a decade investing in mobile, gaming, digital media, and networking companies, is paying particular attention to the implications of big data and the potential opportunities they create.

For the past decade, Ping Li has been investing across a broad range of technology companies with Accel Partners, where he is a general partner. Since its defining Series A investment in Facebook, the firm has been on a roll, opening offices in New York City and expanding its footprint overseas, all while maintaining its anchor in the middle of Palo Alto’s University Avenue. And, over the past few years, Accel has also developed an interest in “big data.”

The term “big data” is thrown around often in conversation and at tech conferences, but despite the generalizations and hype, significant opportunities exist for entrepreneurs and investors alike. Last year, I attempted to analyze how big data impacted the consumer web and concluded that while opportunities were abundant, very few were in a position to capitalize on them given the scarcity of talent in these specific areas of the consumer web.

Li and his partners at Accel are certainly looking at big data as it applies to consumer products — the massive amounts of unstructured social data we are all generating through social media and applications, waiting to be harvested. On the enterprise side of things, however, Li believes big data is on the verge of going mainstream, where datasets and analytical tools will soon be available to everyone, igniting new waves of innovation that could disrupt major public companies from the platform all the way to the application layer.

In this conversation, Li shares his views on the big data landscape and also offers subtle advice to potential founders looking into the space. Having had the benefit of seeing many big data technologies and applications over the past few years, he has developed a keen sense of the minefields founders need to watch out for when building these technologies. To take things a step further, Li and his partners at Accel launched a $100M Big Data Fund, invested in creating an ecosystem of academics, technologists, and thought-leaders, and are hosting a private conference at Stanford on May 9 on this topic (technologists working on big data who would like to attend can contact Accel directly through the conference site).


The entrance to Facebook's campus. (Photograph by Ryan Paul)

Facebook is headquartered in Menlo Park, California, at a site that used to belong to Sun Microsystems. A large sign with Facebook's distinctive "like" symbol—a hand making the thumbs-up gesture—marks the entrance. When I arrived at the campus recently, a small knot of teenagers had congregated, snapping cell phone photos of one another in front of the sign.

Thanks to the film The Social Network, millions of people know the crazy story of Facebook's rise from dorm room project to second largest website in the world. But few know the equally intriguing story about the engine humming beneath the social network's hood: the sophisticated technical infrastructure that delivers an interactive Web experience to hundreds of millions of users every day.

I recently had a unique opportunity to visit Facebook headquarters and see that story in action. Facebook gave me an exclusive behind-the-scenes look at the process it uses to deploy new functionality. I watched first-hand as the company's release engineers rolled out the new "timeline" feature for brand pages.
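The full article covers the details, but the technique at the heart of a rollout like this is staged feature gating: the code ships dark, and a gate turns it on for progressively larger slices of users. Here's a hypothetical Python sketch of percentage-based gating (Facebook's internal tooling is its own; this just shows the general mechanism):

```python
# Hypothetical sketch of percentage-based feature gating, the mechanism
# behind gradual rollouts like the "timeline" launch described above
# (Facebook's actual tooling differs; this just illustrates the technique).

import hashlib

def in_rollout(feature: str, user_id: int, percent: float) -> bool:
    """Stable per-user decision: hash feature+user into [0, 100) and compare.
    The same user always gets the same answer, and raising `percent`
    only ever turns the feature on for more users."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent

# Roll "timeline" out to 5% of users, then widen to 50% without flapping.
user = 1234567
print(in_rollout("timeline", user, 5.0))
print(in_rollout("timeline", user, 50.0))
```

Hashing the feature name together with the user ID keeps each user's answer stable across requests, and widening the percentage only ever adds users, so a rollout can be ramped from a small test group to everyone without any user seeing the feature flicker on and off.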

Read more on Ars Technica…
