
Video cards

Original author: Peter Bright


AMD wants to talk about Heterogeneous Systems Architecture (HSA), its vision for the future of system architectures. To that end, it held a press conference last week to discuss what it's calling "heterogeneous Uniform Memory Access" (hUMA). The company outlined what it was doing and why, reaffirming and building on the things it has been saying for the last couple of years.

The central HSA concept is that systems will have multiple different kinds of processors, connected together and operating as peers. The two main kinds of processors are conventional, versatile CPUs and the more specialized GPUs.

Modern GPUs have enormous parallel arithmetic power, especially for floating-point math, but are poorly suited to single-threaded code with lots of branches. Modern CPUs are well suited to single-threaded, branchy code, but less well suited to massively parallel number crunching. Splitting workloads between a CPU and a GPU, using each for the work it's good at, has driven the development of general-purpose GPU (GPGPU) computing.
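To make that split concrete, here's a minimal CUDA sketch (ours, not AMD's; the kernel and sizes are purely illustrative) in which the branchy, sequential bookkeeping stays on the CPU while the data-parallel arithmetic is offloaded to the GPU:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Data-parallel work the GPU is good at: one lightweight thread per element,
// minimal branching, lots of floating-point arithmetic.
__global__ void scale_and_add(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                  // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Branchy, sequential setup stays on the CPU.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) {
        h_a[i] = (float)i;
        h_b[i] = (i % 2 == 0) ? 1.0f : -1.0f;  // per-element branching a CPU handles cheaply
    }

    // On today's discrete GPUs the data must be copied into the GPU's own
    // memory before the kernel can touch it (and copied back afterwards).
    float *d_a, *d_b, *d_out;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_and_add<<<blocks, threads>>>(d_a, d_b, d_out, n);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[12345] = %f\n", h_out[12345]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    free(h_a); free(h_b); free(h_out);
    return 0;
}

Those explicit copies are exactly the overhead hUMA is meant to eliminate: with a unified, coherent address space, the CPU and GPU could simply work on the same pointers.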



STARING EYES!
Faces are everywhere in games. NVIDIA noticed this and has been on a 20-year odyssey to make faces more facey and less unfacey (while making boobs less booby, if you’ll remember the elf-lady Dawn). Every few years they push out more facey and less unfacey face tech and make it gurn for our fetishistic graphicsface pleasure. Last night at NVIDIA’s GPU Technology Conference, NVIDIA co-founder and CEO Jen-Hsun Huang showed off Face Works, the latest iteration. Want to see how much less unfacey game faces can be?




Mark Cerny gives us our first look at the PS4's internals.

Andrew Cunningham

By the time Sony unveiled the PlayStation 4 at last night's press conference, the rumor mill had already basically told us what the console would be made of inside the (as-yet-nonexistent) box: an x86 processor and GPU from AMD and lots of memory.

Sony didn't reveal all of the specifics about its new console last night (and, indeed, the console itself was a notable no-show), but it did give us enough information to be able to draw some conclusions about just what the hardware can do. Let's talk about what components Sony is using, why it's using them, and what kind of performance we can expect from Sony's latest console when it ships this holiday season.

The CPU


AMD's Jaguar architecture, used for the PS4's eight CPU cores, is a follow-up to the company's Bobcat architecture for netbooks and low-power devices. AMD

We'll get started with the component of most interest to gamers: the chip that actually pushes all those polygons.




Nvidia CEO Jen-Hsun Huang unveils the Nvidia Grid server at the company's CES presentation.

Andrew Cunningham

The most interesting news to come out of Nvidia's two-hour-plus press conference Sunday night was doubtlessly the Tegra 4 mobile processor, followed closely by its intriguing Project Shield tablet-turned-handheld game console. However, company CEO Jen-Hsun Huang also revealed a small morsel of news about the cloud gaming initiative that Nvidia talked up back in May: the Nvidia Grid, the company's own server designed specifically to serve as the back-end for cloud gaming services.

Thus far, virtualized and streaming game services have not set the world on fire. OnLive probably had the highest profile of any such service, and though it continues to live on, it has been defined more by its troubled financial history than its success.

We stopped by Nvidia's CES booth to get some additional insight into just how the Grid servers do their thing and how Nvidia is looking to overcome the technical drawbacks inherent to cloud gaming services.



Though virtual machines have become indispensable in the server room over the last few years, desktop virtualization has been less successful. One of the reasons has been performance, and specifically graphics performance—modern virtualization products are generally pretty good at dynamically allocating CPU power, RAM, and drive space as clients need them, but graphics performance just hasn't been as good as it is on an actual desktop.

NVIDIA wants to solve this problem with its VGX virtualization platform, which it unveiled at its GPU Technology Conference in May. As pitched, the technology will allow virtual machines to use a graphics card installed in a server to accelerate applications, games, and video. Through NVIDIA's VGX Hypervisor, compatible virtualization software (primarily from Citrix, though Microsoft's RemoteFX is also partially supported) can use the GPU directly, allowing thin clients, tablets, and other devices to more closely replicate the experience of using actual desktop hardware.

NVIDIA's VGX K1 is designed to bring basic graphics acceleration to a relatively large number of users. NVIDIA

When last we heard about the hardware that drives this technology, NVIDIA was talking up a board with four GPUs based on its Kepler architecture. That card, now known as the NVIDIA VGX K1, is built to provide basic 3D and video acceleration to a large number of users—up to 100, according to NVIDIA's marketing materials. Each of this card's four GPUs uses 192 of NVIDIA's graphics cores and 4GB of DDR3 RAM (for a total of 768 cores and 16GB of memory), and has a reasonably modest TDP of 150 watts—for reference, NVIDIA's high-end GTX 680 desktop graphics card has a TDP of 195W, and the dual-GPU version (the GTX 690) steps this up to 300W.
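Taking those marketing figures at face value, here's a quick back-of-the-envelope sketch (our arithmetic, not NVIDIA's) of how thin each user's slice of a fully loaded VGX K1 board would be:

#include <cstdio>

// Rough per-user share of one VGX K1 board at NVIDIA's quoted density.
// Figures from the article: 4 GPUs x 192 cores, 4 GPUs x 4GB DDR3, 150W TDP, up to 100 users.
int main()
{
    const int gpus = 4;
    const int cores_per_gpu = 192;
    const int gb_per_gpu = 4;
    const double board_tdp_watts = 150.0;
    const int users = 100;

    const int total_cores = gpus * cores_per_gpu;    // 768 cores
    const int total_gb = gpus * gb_per_gpu;          // 16GB

    printf("cores per user : %.2f\n", (double)total_cores / users);   // ~7.7 cores
    printf("memory per user: %.0f MB\n", total_gb * 1024.0 / users);  // ~164 MB
    printf("power per user : %.1f W\n", board_tdp_watts / users);     // 1.5 W
    return 0;
}

Roughly eight graphics cores, about 164MB of video memory, and a watt and a half per seat helps explain why the K1 is pitched at "basic" 3D and video acceleration rather than high-end gaming.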




Nvidia just finished telling us how it's going to stick a Kepler GPU in the cloud; now, CEO Jen-Hsun Huang is telling us how the company will use that distributed graphics power to stream low-latency video games from the internet to computers that don't have that kind of GPU themselves. Nvidia has partnered with cloud gaming provider Gaikai, and claims that the GeForce Grid GPU has reduced the latency of streaming games to just ten milliseconds by capturing and encoding game frames rapidly, in a single pass. It also promises that the enhanced Gaikai service will be available on TVs, tablets, and smartphones running Android and iOS.
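To put that ten-millisecond figure in context, here's a rough, purely illustrative latency budget for a single streamed frame; only the capture-and-encode number comes from Nvidia's claim, and the rest are our assumptions:

#include <cstdio>

// Illustrative end-to-end budget for one streamed frame. Only the ~10ms
// capture+encode figure comes from Nvidia's claim; the other numbers are
// assumptions chosen to show where that server-side step sits in the pipeline.
int main()
{
    const double render_ms         = 16.7;  // one frame at 60fps (assumption)
    const double capture_encode_ms = 10.0;  // Nvidia's single-pass claim for GeForce Grid
    const double network_ms        = 30.0;  // round trip to a regional data center (assumption)
    const double decode_display_ms = 10.0;  // client-side decode and display (assumption)

    const double total = render_ms + capture_encode_ms + network_ms + decode_display_ms;
    printf("estimated total latency: %.1f ms\n", total);  // ~66.7 ms under these assumptions
    return 0;
}

The real numbers will vary wildly with distance to the data center and the client hardware; the sketch is only meant to show how a 10ms server-side encode fits into the overall path from game to screen.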

David Perry from streaming game company Gaikai is on stage to discuss and demo the technology now; Gaikai also announced that it's working...




Could all those LOLcats, advice animals, rage comics, and various memes you've spent hours laughing at online actually be a new form of artistic self-expression? After taking a look at things from the perspectives of famous artists and philosophers like Andy Warhol and Aristotle, PBS's Idea Channel thinks so, and discusses why in a truly upvote-worthy video.

"Creating images and sharing them with strangers for the purpose of communicating personal experiences? That my friends, is art."

