Nvidia

Original author: Andrew Cunningham


A desktop PC used to need a lot of different chips to make it work. You had the big parts: the CPU that executed most of your code and the GPU that rendered your pretty 3D graphics. But there were a lot of smaller bits too: a chip called the northbridge handled all communication between the CPU, GPU, and RAM, while the southbridge handled communication between the northbridge and other interfaces like USB or SATA. Separate controller chips for things like USB ports, Ethernet ports, and audio were also often required if this functionality wasn't already integrated into the southbridge itself.

As chip manufacturing processes have improved, it's now possible to cram more and more of these previously separate components into a single chip. This not only reduces system complexity, cost, and power consumption, but it also saves space, making it possible to squeeze a high-end computer from yesteryear into a smartphone that fits in your pocket. It's these technological advancements that have given rise to the system-on-a-chip (SoC), one monolithic chip that's home to all of the major components that make these devices tick.

The fact that every one of these chips includes what is essentially an entire computer can make keeping track of an individual chip's features and performance quite time-consuming. To help you keep things straight, we've assembled this handy guide that will walk you through the basics of how an SoC is put together. It will also serve as a guide to most of the current (and future, where applicable) chips available from the big players making SoCs today: Apple, Qualcomm, Samsung, Nvidia, Texas Instruments, Intel, and AMD. There's simply too much to talk about to fit everything into one article of reasonable length, but if you've been wondering what makes a Snapdragon different from a Tegra, here's a start.


Original author: samzenpus

An anonymous reader writes "Linux developers are now working on open-source 3D support for NVIDIA's Tegra, in cooperation with NVIDIA, months after the company published open-source 2D driver code. There are early patches for the Linux kernel along with a Gallium3D driver. The Tegra Gallium3D driver isn't too far along yet, but it is enough to run Wayland with Weston."




STARING EYES!
Faces are everywhere in games. NVIDIA noticed this and has been on a 20-year odyssey to make faces more facey and less unfacey (while making boobs less booby, if you’ll remember the elf-lady Dawn). Every few years they push out more facey and less unfacey face tech and make it gurn for our fetishistic graphicsface pleasure. Last night at NVIDIA’s GPU Technology Conference, company co-founder Jen-Hsun Huang showed off Face Works, the latest iteration. Want to see how much less unfacey games faces can be?




Nvidia CEO Jen-Hsun Huang unveils the Nvidia Grid server at the company's CES presentation. (Photo: Andrew Cunningham)

The most interesting news to come out of Nvidia's two-hour-plus press conference Sunday night was undoubtedly the Tegra 4 mobile processor, followed closely by its intriguing Project Shield handheld game console. However, company CEO Jen-Hsun Huang also revealed a small morsel of news about the cloud gaming initiative that Nvidia talked up back in May: the Nvidia Grid, the company's own server designed specifically to serve as the back end for cloud gaming services.

Thus far, virtualized and streaming game services have not set the world on fire. OnLive probably had the highest profile of any such service, and though it continues to live on, it has been defined more by its troubled financial history than its success.

We stopped by Nvidia's CES booth to get some additional insight into just how the Grid servers do their thing and how Nvidia is looking to overcome the technical drawbacks inherent to cloud gaming services.



Though virtual machines have become indispensable in the server room over the last few years, desktop virtualization has been less successful. One of the reasons has been performance, and specifically graphics performance—modern virtualization products are generally pretty good at dynamically allocating CPU power, RAM, and drive space as clients need them, but graphics performance just hasn't been as good as it is on an actual desktop.

NVIDIA wants to solve this problem with its VGX virtualization platform, which it unveiled at its GPU Technology Conference in May. As pitched, the technology will allow virtual machines to use a graphics card installed in a server to accelerate applications, games, and video. Through NVIDIA's VGX Hypervisor, compatible virtualization software (primarily from Citrix, though Microsoft's RemoteFX is also partially supported) can use the GPU directly, allowing thin clients, tablets, and other devices to more closely replicate the experience of using actual desktop hardware.

NVIDIA's VGX K1 is designed to bring basic graphics acceleration to a relatively large number of users. (Image: NVIDIA)

When last we heard about the hardware that drives this technology, NVIDIA was talking up a board with four GPUs based on its Kepler architecture. That card, now known as the NVIDIA VGX K1, is built to provide basic 3D and video acceleration to a large number of users: up to 100, according to NVIDIA's marketing materials. Each of the card's four GPUs uses 192 of NVIDIA's graphics cores and 4GB of DDR3 RAM (for a total of 768 cores and 16GB of memory), and the card as a whole has a reasonably modest TDP of 150 watts. For reference, NVIDIA's high-end GTX 680 desktop graphics card has a TDP of 195W, and the dual-GPU GTX 690 steps that up to 300W.
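For readers who like to check the math, here is a minimal C snippet that simply reproduces the arithmetic quoted above; every figure in it comes straight from that paragraph, and nothing else is assumed.

#include <stdio.h>

int main(void)
{
    const int gpus           = 4;    /* GPUs on one VGX K1 board        */
    const int cores_per_gpu  = 192;  /* graphics cores per GPU          */
    const int ram_per_gpu_gb = 4;    /* GB of DDR3 attached to each GPU */

    printf("total cores : %d\n", gpus * cores_per_gpu);     /* 768 */
    printf("total memory: %d GB\n", gpus * ram_per_gpu_gb); /* 16  */

    /* TDP comparison from the same paragraph: the whole four-GPU board
       is rated at 150W, below a single GTX 680 (195W) or GTX 690 (300W). */
    printf("VGX K1: 150W  GTX 680: 195W  GTX 690: 300W\n");
    return 0;
}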




Intel has been working on its "Many Integrated Core" architecture for quite some time, but the chipmaker has finally taken the code-name gloves off and announced that Knights Corner will be the first in a new family of Xeon processors: the Xeon Phi. These co-processors will debut later this year (Intel says "by the end of 2012") and will come in the form of a 50-core PCIe card that includes at least 8GB of GDDR5 RAM. The card runs an independent Linux operating system that manages each x86 core, and Intel is hoping that giving developers a familiar architecture to program for will make the Xeon Phi a much more attractive platform than Nvidia's Tesla.
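To give a sense of what that "familiar architecture" pitch means in practice, here is a minimal sketch of offload-style C code, assuming Intel's compiler and its offload pragma; the array names, sizes, and the vector-add workload are illustrative placeholders rather than anything from Intel's announcement.

#include <stdio.h>
#include <stdlib.h>

#define N 1024

int main(void)
{
    float *a = malloc(N * sizeof(float));
    float *b = malloc(N * sizeof(float));
    float *c = malloc(N * sizeof(float));

    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * i;
    }

    /* With Intel's compiler, this pragma ships the loop and the named
       arrays to the co-processor; other compilers ignore the unknown
       pragma and the same plain C loop runs on the host instead. */
    #pragma offload target(mic) in(a, b : length(N)) out(c : length(N))
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[10] = %f\n", c[10]);

    free(a);
    free(b);
    free(c);
    return 0;
}

Because the cores are ordinary x86 and the card runs its own Linux, the same loop body compiles for host and co-processor alike, which is the contrast Intel is drawing with the CUDA-centric programming model of Nvidia's Tesla parts.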

The Phi is part of Intel's High Performance Computing (HPC) program, where the...



Last week, Linux creator Linus Torvalds was in Finland giving a talk when a woman in the crowd asked why chip maker Nvidia refused to support Linux on a brand new machine she had bought.

Torvalds had some choice words -- and gestures -- about it. He called Nvidia “the single worst company we’ve ever dealt with” and said it's "really sad" because Nvidia wants to sell a lot of chips into the Android market. (Android is based on Linux.)

And then he turned to the camera and flipped Nvidia off.

Skip ahead to 49:25 to see the action.


