
ethernet

Original author: Soulskill

coop0030 writes "Feel like someone is snooping on you? Browse anonymously anywhere you go with the Onion Pi Tor proxy. This is a fun weekend project from Adafruit that uses a Raspberry Pi, a USB WiFi adapter, and an Ethernet cable to create a small, low-power, and portable privacy Pi."



Original author: Andrew Cunningham


A desktop PC used to need a lot of different chips to make it work. You had the big parts: the CPU that executed most of your code and the GPU that rendered your pretty 3D graphics. But there were a lot of smaller bits too: a chip called the northbridge handled all communication between the CPU, GPU, and RAM, while the southbridge handled communication between the northbridge and other interfaces like USB or SATA. Separate controller chips for things like USB ports, Ethernet ports, and audio were also often required if this functionality wasn't already integrated into the southbridge itself.

As chip manufacturing processes have improved, it's now possible to cram more and more of these previously separate components into a single chip. This not only reduces system complexity, cost, and power consumption, but it also saves space, making it possible to squeeze a high-end computer from yesteryear into a smartphone that fits in your pocket. It's these technological advancements that have given rise to the system-on-a-chip (SoC), one monolithic chip that's home to all of the major components that make these devices tick.
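
To make the shift concrete, here is a rough sketch in Python of the two layouts described above. It is purely a mental model with generic placeholder names, not any vendor's actual block diagram or part list.

    # Purely illustrative: generic component names, not real parts or a vendor diagram.

    # Classic multi-chip desktop: each chip and the components it talks to.
    classic_pc = {
        "CPU":         ["northbridge"],
        "GPU":         ["northbridge"],
        "RAM":         ["northbridge"],
        "northbridge": ["CPU", "GPU", "RAM", "southbridge"],
        "southbridge": ["northbridge", "USB controller", "SATA controller",
                        "Ethernet controller", "audio codec"],
    }

    # System-on-a-chip: the same functional blocks, integrated onto one die.
    soc = {
        "single die": ["CPU cores", "GPU", "memory controller",
                       "USB", "SATA/storage", "Ethernet/modem", "audio"],
    }

    print(f"Classic design: {len(classic_pc)} cooperating chips plus discrete controllers")
    print(f"SoC design: {len(soc)} chip containing {len(soc['single die'])} blocks")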

The fact that every one of these chips includes what is essentially an entire computer can make keeping track of an individual chip's features and performance quite time-consuming. To help you keep things straight, we've assembled this handy guide that will walk you through the basics of how an SoC is put together. It will also serve as a guide to most of the current (and future, where applicable) chips available from the big players making SoCs today: Apple, Qualcomm, Samsung, Nvidia, Texas Instruments, Intel, and AMD. There's simply too much to talk about to fit everything into one article of reasonable length, but if you've been wondering what makes a Snapdragon different from a Tegra, here's a start.


Original author: Arik Hesseldahl

Here’s a name I haven’t heard in a while: Anso Labs.

This was the cloud computing startup that originated at NASA, where the original ideas for OpenStack, the open source cloud computing platform, were born. Anso Labs was acquired by Rackspace a little more than two years ago.

It was a small team. But now a lot of the people who ran Anso Labs are back with a new outfit, still devoted to cloud computing, and still devoted to OpenStack. It’s called Nebula. And it builds a turnkey computer that will turn an ordinary rack of servers into a cloud-ready system, running — you guessed it — OpenStack.

Based in Mountain View, Calif., Nebula claims to have an answer for any company that has ever wanted to build its own private cloud system rather than rely on outside vendors like Amazon, Hewlett-Packard, or Rackspace to run it.

It’s called the Nebula One. And the setup is pretty simple, said Nebula CEO and founder Chris Kemp: Plug the servers into the Nebula One, then you “turn it on and it boots up cloud.” All of the provisioning and management that a service provider would normally charge you for is built into the hardware itself. There are no services to buy, no consultants to pay to set it up. “Turn on the power switch, and an hour later you have a petascale cloud running on your premises,” Kemp told me.

The Nebula One sits at the top of a rack of servers; on its back are 48 Ethernet ports. It runs an operating system called Cosmos that grabs all the memory and storage and CPU capacity from every server in the rack and makes them part of the cloud. It doesn’t matter who made them — Dell, Hewlett-Packard or IBM.
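
Nebula hasn’t said how Cosmos does this internally, so the sketch below is only a loose illustration of the general idea of pooling a mixed rack’s resources into one logical cloud; the server specs, field names, and numbers are invented for the example.

    # Loose illustration only; not how Cosmos actually works.
    from dataclasses import dataclass

    @dataclass
    class Server:
        vendor: str      # Dell, Hewlett-Packard, IBM -- the pool doesn't care
        cores: int
        ram_gb: int
        disk_tb: float

    def pool_capacity(rack):
        """Aggregate every server in the rack into one logical resource pool."""
        return {
            "cores":   sum(s.cores for s in rack),
            "ram_gb":  sum(s.ram_gb for s in rack),
            "disk_tb": sum(s.disk_tb for s in rack),
        }

    rack = [
        Server("Dell", 16, 128, 4.0),
        Server("Hewlett-Packard", 24, 256, 8.0),
        Server("IBM", 16, 192, 6.0),
    ]

    print(pool_capacity(rack))   # {'cores': 56, 'ram_gb': 576, 'disk_tb': 18.0}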

Kemp named two customers: Genentech and Xerox’s research lab, PARC. There are more customer names coming, he says, and the company already boasts investments from Kleiner Perkins, Highland Capital and Comcast Ventures. Nebula is also the only startup company that is a platinum member of the OpenStack Foundation. Others include IBM, HP, Rackspace, Red Hat and AT&T.

If OpenStack becomes as easy to deploy as Kemp says it can be, a lot of companies — those that can afford to have their own data centers, anyway — are going to have their own clouds. And that is sort of the point.



The corporate data center is undergoing a major transformation the likes of which haven't been seen since Intel-based servers started replacing mainframes decades ago. It isn't just the server platform: the entire infrastructure from top to bottom is seeing major changes as applications migrate to private and public clouds, networks get faster, and virtualization becomes the norm.

All of this means tomorrow's data center is going to look very different from today's. Processors, systems, and storage are getting better integrated, more virtualized, and more capable of making use of greater networking and Internet bandwidth. At the heart of these changes are major advances in networking. We're going to examine six specific trends driving the evolution of the next-generation data center and discover what both IT insiders and end-user departments outside of IT need to do to prepare for these changes.

Beyond 10Gb networks

Network connections are getting faster, to be sure. Today it's common to find 10-gigabit Ethernet (10GbE) connections to some large servers. But even 10GbE isn't fast enough for data centers that are heavily virtualized or handling large-scale streaming audio/video applications. As your population of virtual servers increases, you need faster networks to handle the heavier traffic they generate. Starting up a new virtual server might save you from buying a physical server, but it doesn't lessen the data traffic over the network—in fact, depending on how your virtualization infrastructure works, a virtual server can impact the network far more than a physical one. And as more audio and video applications are used by ordinary enterprises in common business situations, file sizes balloon too. The result is multi-gigabyte files that can quickly fill up your pipes—even the big 10Gb internal pipes that make up your data center's LAN.
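
To put rough numbers on that, here is a back-of-the-envelope sketch of raw transfer times. The 25 GB file size is just an example, and the math ignores protocol overhead, disk speed, and contention, so real transfers will be slower.

    # Back-of-the-envelope only: assumes the link is the bottleneck and
    # ignores protocol overhead, disk speed, and contention.

    def transfer_seconds(file_gigabytes: float, link_gbps: float) -> float:
        bits = file_gigabytes * 8e9        # decimal gigabytes -> bits
        return bits / (link_gbps * 1e9)    # link speed in bits per second

    for link in (1, 10, 40):
        print(f"25 GB file over {link} GbE: about {transfer_seconds(25, link):.0f} seconds")
    # ~200 s at 1 GbE, ~20 s at 10 GbE, ~5 s at 40 GbE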



I'm trying to figure out how long it would take to send 1000 bytes of data if you have a bandwidth of 1 Mbps, but I want to take headers and trailers into account. However, I don't know how to do this without being explicitly told what the headers and trailers are. Or maybe my approach is just wrong?

Can anyone help out? I posted on r/learnprogramming but found this, which seems to be a better fit. If there's somewhere better to go, please tell me. Thank you.
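
There is no single right answer without assuming a particular protocol stack, which is probably why the exercise feels underspecified. Purely as one worked example, assuming the 1000 bytes travel as a single TCP segment over IPv4 inside one Ethernet frame (an assumption, not something the exercise states):

    # Assumed framing: TCP over IPv4 over Ethernet, no options, single frame.
    payload_bytes = 1000
    tcp_header    = 20    # bytes, no options
    ipv4_header   = 20    # bytes, no options
    eth_header    = 14    # bytes: dst MAC, src MAC, EtherType
    eth_trailer   = 4     # bytes: frame check sequence

    frame_bits    = (payload_bytes + tcp_header + ipv4_header
                     + eth_header + eth_trailer) * 8
    bandwidth_bps = 1_000_000    # 1 Mbps

    print(f"{frame_bits} bits on the wire -> {frame_bits / bandwidth_bps * 1000:.3f} ms")
    # 8464 bits -> 8.464 ms, ignoring preamble, interframe gap, and propagation delay

Different header assumptions (UDP instead of TCP, IPv6, VLAN tags, or counting the Ethernet preamble) change the total, which is why the exercise can't be solved without being told what framing to assume.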

submitted by dogboatmanface



Nvidia just finished telling us about how it's going to stick a Kepler GPU in the cloud; now, CEO Jen-Hsun Huang is telling us how it will use distributed graphics to stream low-latency video games from the internet to computers that don't have the graphics hardware themselves. Nvidia's partnered with cloud gaming provider Gaikai, and claims that the GeForce Grid GPU has reduced the latency of streaming games to just ten milliseconds by capturing and encoding game frames rapidly in a single pass, and it promises that the enhanced Gaikai service will be available on TVs, tablets, and smartphones running Android and iOS.
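
Nvidia didn't break down where the milliseconds go, so the sketch below is only an illustration of how a cloud-gaming latency budget adds up; every stage timing in it is invented for the example rather than taken from Nvidia or Gaikai.

    # Illustrative numbers only; not Nvidia's or Gaikai's published figures.
    latency_budget_ms = {
        "render frame on server GPU": 5,
        "capture and encode frame":   10,
        "network to the player":      20,
        "decode on the client":       5,
        "display refresh":            8,
    }

    for stage, ms in latency_budget_ms.items():
        print(f"{stage:28s} {ms:3d} ms")
    print(f"{'total (illustrative)':28s} {sum(latency_budget_ms.values()):3d} ms")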

David Perry from streaming game company Gaikai is on stage to discuss and demo the technology now; Gaikai also announced that it's working...

