Intel
Original author: TEDTalks

In our digital world, social relations have become mediated by data. Without even realizing it, we're barricading ourselves against strangeness: people and ideas that don't fit the patterns of who we already know, what we already like, and where we've already been. This talk is a call for technology to deliver us to what and who we need, even if it's unfamiliar. (Filmed at TED@Intel.)

Original author: Casey Johnston


Pichai seems open to Android meaning lots of different things to lots of people and companies.

An interview with Sundar Pichai over at Wired has settled some questions about suspected Google plans, rivalries, and alliances. Pichai was recently announced as Andy Rubin's replacement as head of Android, and ahead of Google I/O he expressed cool confidence about the company's relationships with both Facebook and Samsung. He even sounded optimistic about Android's spotty OS update situation.

Tensions between Google and Samsung, the overwhelmingly dominant Android handset manufacturer, are reportedly rising. But Pichai expressed nothing but goodwill toward the company. “We work with them on pretty much almost all our important products,” Pichai said while brandishing his own Samsung Galaxy S 4. “Samsung plays a critical role in helping Android be successful.”

Pichai noted in particular the need for companies that make “innovation in displays [and] in batteries” a priority. His attitude toward Motorola, which Google bought almost two years ago, was more nonchalant: “For the purposes of the Android ecosystem, Motorola is [just another] partner.”


Original author: Todd Hoff

Now that we have the C10K concurrent-connection problem licked, how do we level up and support 10 million concurrent connections? Impossible, you say? Nope: systems right now are delivering 10 million concurrent connections using techniques that are as radical as they are unfamiliar.

To learn how it's done we turn to Robert Graham, CEO of Errata Security, and his absolutely fantastic Shmoocon 2013 talk, "C10M: Defending the Internet at Scale."

Robert has a brilliant way of framing the problem, one I'd never heard before. He starts with a little history, relating how Unix wasn't originally designed to be a general-purpose server OS; it was designed as the control system for a telephone network. The telephone network itself transported the data, so there was a clean separation between the control plane and the data plane. The problem is that we now use Unix servers as part of the data plane, which we shouldn't do at all. If we were designing a kernel to run a single application per server, we would design it very differently from a multi-user kernel.

Which is why he says the key is to understand:

  • The kernel isn’t the solution. The kernel is the problem.

Which means:

  • Don’t let the kernel do all the heavy lifting. Take packet handling, memory management, and processor scheduling out of the kernel and put it into the application, where it can be done efficiently. Let Linux handle the control plane and let the the application handle the data plane.

The result will be a system that can handle 10 million concurrent connections with 200 clock cycles for packet handling and 1,400 clock cycles for application logic. Since a single main-memory access costs around 300 clock cycles, that budget leaves room for only a handful of cache misses per packet, so it's key to design in a way that minimizes code paths and cache misses.

With a data plane oriented system you can process 10 million packets per second. With a control plane oriented system you only get 1 million packets per second.

If this seems extreme, keep in mind the old saying: scalability is specialization. To do something great you can't outsource performance to the OS. You have to do it yourself.

Now, let’s learn how Robert creates a system capable of handling 10 million concurrent connections...

Original author: Megan Geuss

A website built by two programmers, Stephen LaPorte and Mahmoud Hashemi, displays recent changes to Wikipedia in real time on a map of the world. When a new change is saved to the crowd-sourced encyclopedia, the title of the edited article shows up on the map at the editor's location according to his or her IP address.

Not all recent changes appear, however: the site maps only contributions from unregistered Wikipedia users, who are identified solely by IP address when they make an edit. This is just as well; a similar website called Wikistream logs all changes to Wikipedia (although not in such a graphically friendly way), and watching the flood of new entries can get overwhelming, fast.

LaPorte and Hashemi said they built their map using the JavaScript library D3, datamaps-world.js, a service for looking up the geolocation of IP addresses called freegeoip.net, and Wikimedia's recent-changes IRC feed. The two programmers note in their blog that "you may see some users add non-productive or disruptive content to Wikipedia. A survey in 2007 indicated that unregistered users are less likely to make productive edits to the encyclopedia." Helpfully, when you see a change made to a specific article, you can click on that change to view how the page has been edited (and change it back if it merits more editing).
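
For a feel of what such a map consumes, here is a minimal sketch (my own illustration, not LaPorte and Hashemi's code) that tails the recent-changes feed over plain IRC. The server and channel names (irc.wikimedia.org, port 6667, #en.wikipedia) are as Wikimedia has documented them and may change; line reassembly and error handling are pared down for brevity.

    /* Tail Wikimedia's recent-changes IRC feed and print each event. */
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        if (getaddrinfo("irc.wikimedia.org", "6667", &hints, &res) != 0)
            return 1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
            return 1;
        freeaddrinfo(res);

        /* Register; the feed is read-only, so any unused nickname works. */
        dprintf(fd, "NICK rc-sketch\r\nUSER rc 0 * :rc sketch\r\n");

        char buf[4096];
        ssize_t n;
        int joined = 0;
        while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);          /* each PRIVMSG is one edit event */
            if (!joined && strstr(buf, " 001 ")) {   /* welcome seen: join */
                dprintf(fd, "JOIN #en.wikipedia\r\n");
                joined = 1;
            }
            /* Naive keep-alive: assumes PING arrives at the buffer start. */
            if (strncmp(buf, "PING", 4) == 0) {
                char *eol = strstr(buf, "\r\n");
                if (eol) *eol = '\0';
                dprintf(fd, "PONG%s\r\n", buf + 4);
            }
        }
        close(fd);
        return 0;
    }

From there, the site's remaining work is what the authors describe: extract the contributor's IP address from each anonymous-edit line, look it up via freegeoip.net, and hand the coordinates to D3 for plotting.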


Original author: Peter Bright

AMD wants to talk about Heterogeneous Systems Architecture (HSA), its vision for the future of system architectures. To that end, it held a press conference last week to discuss what it's calling "heterogeneous Uniform Memory Access" (hUMA). The company outlined what it is doing and why, reaffirming the things it has been saying for the last couple of years.

The central HSA concept is that systems will have multiple kinds of processors, connected together and operating as peers. The two main kinds are the conventional, versatile CPU and the more specialized GPU.

Modern GPUs have enormous parallel arithmetic power, especially for floating-point work, but are poorly suited to single-threaded code with lots of branches. Modern CPUs are the reverse: well suited to branchy single-threaded code, less well suited to massively parallel number crunching. Splitting workloads between a CPU and a GPU, using each for what it's good at, has driven the development of general-purpose GPU (GPGPU) computing.


Original author: Jon Brodkin

The Linux Foundation has taken control of the open source Xen virtualization platform and enlisted a dozen industry giants in a quest to make it the leading software for building cloud networks.

The 10-year-old Xen hypervisor was formerly a community project sponsored by Citrix, much as the Fedora operating system is a community project sponsored by Red Hat. Citrix was looking to place Xen into a vendor-neutral organization, however, and the Linux Foundation move was announced today. The list of companies that will "contribute to and guide the Xen Project" is impressive, including Amazon Web Services, AMD, Bromium, Calxeda, CA Technologies, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon.

Amazon is perhaps the most significant name on that list in regard to Xen. The Amazon Elastic Compute Cloud is likely the most widely used public infrastructure-as-a-service (IaaS) cloud, and it is built on Xen virtualization. Rackspace's public cloud also uses Xen. Linux Foundation Executive Director Jim Zemlin noted in his blog that Xen "is being deployed in public IaaS environments by some of the world's largest companies."


Original author: Soulskill

An anonymous reader writes "In late 2011, Cornell University won a prize from NYC Mayor Bloomberg's contest to design a new science school. Google donated some space in Manhattan, and since January this year students have been enrolled in the school's 'beta class, a one-year master's program in computer science.' The beta curriculum is designed to equip the students with all the knowledge they need to jump right into a tech startup: there's a mandatory business class, the U.S. Commerce Department stationed a patent officer on-site, and mentors from the private sector are brought in to help with design. 'The curriculum will not be confined to standard disciplines, but will combine fields like electrical engineering, software development and social sciences, and professors will teach across those boundaries. In fact, no professor has an office, not even the dean, and Dr. Huttenlocher insists they will not when the campus moves to Roosevelt Island, either. Instead, each person has a desk with low dividers, and people can grab conference rooms as needed — much like the headquarters of a small tech company.' It's a long, interesting article about how they're trying to turn 'tech school' into something a lot more rigorous and innovative than something like ITT Tech."


Original author: Andrew Cunningham


A desktop PC used to need a lot of different chips to make it work. You had the big parts: the CPU that executed most of your code and the GPU that rendered your pretty 3D graphics. But there were a lot of smaller bits too: a chip called the northbridge handled all communication between the CPU, GPU, and RAM, while the southbridge handled communication between the northbridge and other interfaces like USB or SATA. Separate controller chips for things like USB ports, Ethernet ports, and audio were also often required if this functionality wasn't already integrated into the southbridge itself.

As chip manufacturing processes have improved, it has become possible to cram more and more of these previously separate components into a single chip. This not only reduces system complexity, cost, and power consumption, it also saves space, making it possible to squeeze what used to be a high-end computer into a smartphone that fits in your pocket. It's these technological advancements that have given rise to the system-on-a-chip (SoC), one monolithic chip that's home to all of the major components that make these devices tick.

The fact that every one of these chips includes what is essentially an entire computer can make keeping track of an individual chip's features and performance quite time-consuming. To help you keep things straight, we've assembled this handy guide that will walk you through the basics of how an SoC is put together. It will also serve as a guide to most of the current (and future, where applicable) chips available from the big players making SoCs today: Apple, Qualcomm, Samsung, Nvidia, Texas Instruments, Intel, and AMD. There's simply too much to talk about to fit everything into one article of reasonable length, but if you've been wondering what makes a Snapdragon different from a Tegra, here's a start.


Original author: Soulskill

concealment sends this quote from the NY Times: "Today’s chips are made on large wafers that hold hundreds of fingernail-sized dies, each with the same electronic circuit. The wafers are cut into individual dies and packaged separately, only to be reassembled on printed circuit boards, which may each hold dozens or hundreds of chips. PARC researchers have a very different model in mind. ... they have designed a laser-printer-like machine that will precisely place tens or even hundreds of thousands of chiplets, each no larger than a grain of sand, on a surface in exactly the right location and in the right orientation. The chiplets can be both microprocessors and computer memory as well as the other circuits needed to create complete computers. They can also be analog devices known as microelectromechanical systems, or MEMS, that perform tasks like sensing heat, pressure or motion. The new manufacturing system the PARC researchers envision could be used to build custom computers one at a time, or as part of a 3-D printing system that makes smart objects with computing woven right into them."

