Original author: Andrew Cunningham


I log some face-on time with Glass at Google I/O.

Florence Ion

"When you're at a concert and the band takes the stage, nowadays 50,000 phones and tablets go into the air," said Google Senior Development Advocate Timothy Jordan in the first Google Glass session of this year's Google I/O. "Which isn't all that weird, except that people seem to be looking at the tablets more than they are the folks onstage or the experience that they're having. It's crazy because we love what technology gives us, but it's a bummer when it gets in the way, when it gets between us and our lives, and that's what Glass is addressing."

The upshot of this perspective is that Glass and its software are designed for quick use. You fire it up, do what you want to do, and get back to your business without diving into your pocket for your phone, unlocking it, and so on. Whether this process is more distracting than talking to someone with Glass strapped to his or her face is another conversation, but this is the problem that Google is attempting to solve.

Since Google I/O is a developer's conference, the Glass sessions didn't focus on the social implications of using Glass or the privacy questions that some have raised. Rather, the focus was on how to make applications for this new type of device, something that is designed to give you what you want at a moment's notice and then get out of the way. Here's a quick look at what that ethos does to the platform's applications.

Original author: Andrew Cunningham

So far this year's Google I/O has been very developer-centric—perhaps not surprising given that I/O is, at the end of the day, a developer's conference. Especially compared to last year's skydiving, Glass-revealing, Nexus-introducing keynote, yesterday's three-and-a-half-hour keynote presentation focused overwhelmingly on back-end technologies rather than concrete products aimed at consumers.

There's still plenty to see. All this year we've been taking photos to show you just what it's like to cover these shows—we've shown you things as large as CES and as small as Nvidia's GPU Technology Conference. Our pictures from the first day of Google I/O should give you some idea of what it's like to attend a developer conference for one of tech's most influential companies.


You are here

I/O is held in the west hall of the Moscone Center, and between the giant Google signs and this real-life Google Maps pin you'd be hard-pressed to miss it.

Andrew Cunningham

Original author: Andrew Cunningham


The Galaxy S 4's display is a sizable step forward for PenTile AMOLED, according to DisplayMate's Raymond Soneira.

Florence Ion

We've already given you our subjective impressions of Samsung's Galaxy S 4 and its 1080p AMOLED display, but for those of you who hunger for quantitative data, Dr. Raymond Soneira of DisplayMate has given the phone an in-depth shakedown. Soneira compares the screen's brightness, contrast, color gamut, and power consumption to both the Galaxy S III (which also uses an AMOLED display) and the IPS panel in the iPhone 5. He found that Samsung's AMOLED technology is still fighting some of its inherent weaknesses, but it has made great strides even since the Galaxy S III was released last year.

To recap: both the S III and S 4 use PenTile AMOLED screens, which use a slightly different pixel arrangement than traditional LCD screens. A pixel in a standard LCD panel has one red, one green, and one blue stripe; PenTile uses alternating red-green-blue-green subpixels, taking advantage of the eye's sensitivity to green to display the same image using fewer total subpixels. These screens cost less to manufacture but can have issues with color accuracy and text crispness. The two display types are also lit differently—white LEDs behind the iPhone's display shine through its red, green, and blue subpixels to create an image, while AMOLED subpixels are self-lit and need no backlight at all. This has implications for brightness, contrast, and power consumption.
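To put rough numbers on that subpixel difference, here is a minimal Python sketch; the 1920x1080 resolution is used purely as an example, and the two-versus-three subpixels-per-pixel figures follow from the layouts described above.

# Subpixel counts for a 1920x1080 panel under the two layouts described above.
# RGB stripe: every pixel gets its own red, green, and blue subpixel (3 per pixel).
# RGBG PenTile: greens are per-pixel, but reds and blues are shared between
# neighboring pixels, so each pixel averages out to 2 subpixels.
width, height = 1920, 1080
pixels = width * height

rgb_stripe_subpixels = pixels * 3
pentile_subpixels = pixels * 2

print(f"RGB stripe:   {rgb_stripe_subpixels:,} subpixels")
print(f"PenTile RGBG: {pentile_subpixels:,} subpixels")
print(f"PenTile uses {pentile_subpixels / rgb_stripe_subpixels:.0%} as many subpixels")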


A close-up shot of PenTile AMOLED in the Nexus One, when the tech was much less mature. Luke Hutchinson

We'll try to boil Soneira's findings down to their essence. One of the S 4's most obvious improvements is its pixel density, which at 441 ppi is considerably higher than either its predecessor's or the iPhone 5's. Soneira says this helps overcome the imbalance between PenTile's green subpixels and its less numerous red and blue ones, all but banishing PenTile's "fuzzy text" issues.
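For a sense of where that 441 ppi figure comes from, here is a small sketch of the standard pixel-density calculation (diagonal resolution in pixels divided by diagonal size in inches), assuming the commonly quoted 5-inch, 1920x1080 panel for the S 4 and 4-inch, 1136x640 panel for the iPhone 5.

import math

# Pixel density (ppi) = diagonal pixel count / diagonal size in inches.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(f"Galaxy S 4: {ppi(1920, 1080, 5.0):.0f} ppi")  # ~441
print(f"iPhone 5:   {ppi(1136, 640, 4.0):.0f} ppi")   # ~326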

Original author: Andrew Cunningham

Aurich Lawson / Thinkstock

Welcome back to our three-part series on touchscreen technology. Last time, Florence Ion walked you through the technology's past, from the invention of the first touchscreens in the 1960s all the way up through the mid-2000s. During this period, different versions of the technology appeared in everything from PCs to early cell phones to personal digital assistants like Apple's Newton and the Palm Pilot. But all of these gadgets proved to be little more than a tease, a prelude to the main event. In this second part in our series, we'll be talking about touchscreens in the here-and-now.

When you think about touchscreens today, you probably think about smartphones and tablets, and for good reason. The 2007 introduction of the iPhone kicked off a transformation that turned a couple of niche products—smartphones and tablets—into billion-dollar industries. The current fierce competition from software like Android and Windows Phone (as well as hardware makers like Samsung and a host of others) means that new products are being introduced at a frantic pace.

The screens themselves are just one of the driving forces that make these devices possible (and successful). Ever-smaller, ever-faster chips allow a phone to do things only a heavy-duty desktop could do just a decade or so ago, something we've discussed in detail elsewhere. The software that powers these devices is more important, though. Where older tablets and PDAs required a stylus or a cramped physical keyboard or trackball, mobile software has adapted to humans' native pointing device—the larger, clumsier, but much more convenient finger.

Original author: Andrew Cunningham

Andrew Cunningham / Aurich Lawson

A desktop PC used to need a lot of different chips to make it work. You had the big parts: the CPU that executed most of your code and the GPU that rendered your pretty 3D graphics. But there were a lot of smaller bits too: a chip called the northbridge handled all communication between the CPU, GPU, and RAM, while the southbridge handled communication between the northbridge and other interfaces like USB or SATA. Separate controller chips for things like USB ports, Ethernet ports, and audio were also often required if this functionality wasn't already integrated into the southbridge itself.

As chip manufacturing processes have improved, it has become possible to cram more and more of these previously separate components into a single chip. This not only reduces system complexity, cost, and power consumption, but it also saves space, making it possible to squeeze what used to be a high-end computer into a smartphone that fits in your pocket. These technological advancements have given rise to the system-on-a-chip (SoC), one monolithic chip that's home to all of the major components that make these devices tick.
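As a rough illustration of that consolidation—the component names below are generic placeholders, not a bill of materials for any real product—here is a short sketch contrasting the old discrete parts list with an SoC's integrated blocks.

# Generic, illustrative comparison of a discrete desktop design and an SoC.
# Names are placeholders, not a parts list for any specific product.
traditional_desktop = {
    "CPU": "executes most of your code",
    "GPU": "renders 3D graphics",
    "northbridge": "links CPU, GPU, and RAM",
    "southbridge": "links the northbridge to USB, SATA, and other interfaces",
    "USB controller": "drives the USB ports",
    "Ethernet controller": "drives the network port",
    "audio codec": "handles audio in and out",
}

system_on_a_chip = {
    "SoC": "CPU cores, GPU, memory controller, and I/O blocks on a single die",
}

print(f"Discrete design: {len(traditional_desktop)} major chips")
print(f"SoC design: {len(system_on_a_chip)} chip covering the same roles")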

The fact that every one of these chips includes what is essentially an entire computer can make keeping track of an individual chip's features and performance quite time-consuming. To help you keep things straight, we've assembled this handy guide that will walk you through the basics of how an SoC is put together. It will also serve as a guide to most of the current (and future, where applicable) chips available from the big players making SoCs today: Apple, Qualcomm, Samsung, Nvidia, Texas Instruments, Intel, and AMD. There's simply too much to talk about to fit everything into one article of reasonable length, but if you've been wondering what makes a Snapdragon different from a Tegra, here's a start.



Bezos' "remote display" patent envisions tablets and e-readers that are just screens—power and processing is provided wirelessly by a central system.

US Patent & Trademark Office

It seems like everyone is trying to jump on the cloud computing bandwagon, but Amazon Chairman and CEO Jeff Bezos wants to take it to a whole new level. GeekWire reports that he and Gregory Hart have filed a patent for "remote displays" that would get data and power from a centrally located "primary station." The tablets or e-readers would simply be screens, and the need for a large internal battery or significant local processing power would theoretically be obviated by the primary station.

The patent sees processors and large internal batteries as the next major roadblocks in the pursuit of thinner and lighter devices: "The ability to continue to reduce the form factor of many of today's devices is somewhat limited, however, as the devices typically include components such as processors and batteries that limit the minimum size and weight of the device. While the size of a battery is continuously getting smaller, the operational or functional time of these smaller batteries is often insufficient for many users."

The full patent is an interesting read, since it presents other potential use cases for these "remote displays" that wouldn't necessarily need to wait on this theoretical fully wireless future-tablet to come to pass. For example: a camera or sensor could detect when a hand is passed over an e-reader display and respond by turning the page. A touch-sensitive casing could detect when a child is handling a display by measuring things like the length and width of their fingers and then disable purchasing of new content or the ability to access "inappropriate" content.
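As a purely hypothetical sketch of that finger-measurement idea—the thresholds, function names, and action sets below are invented for illustration and are not taken from the filing:

# Hypothetical sketch of the patent's child-detection idea: if the measured
# fingers are small, restrict purchasing and content access. All thresholds
# and action names are invented for illustration, not taken from the filing.
CHILD_FINGER_WIDTH_MM = 10.0
CHILD_FINGER_LENGTH_MM = 45.0

def looks_like_child(width_mm, length_mm):
    return width_mm < CHILD_FINGER_WIDTH_MM and length_mm < CHILD_FINGER_LENGTH_MM

def allowed_actions(width_mm, length_mm):
    if looks_like_child(width_mm, length_mm):
        return {"read", "turn_page"}  # no purchases, no "inappropriate" content
    return {"read", "turn_page", "purchase", "access_all_content"}

print(allowed_actions(8.0, 40.0))   # small fingers -> restricted set
print(allowed_actions(16.0, 70.0))  # adult-sized fingers -> full set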



Mark Cerny gives us our first look at the PS4's internals.

Andrew Cunningham

By the time Sony unveiled the PlayStation 4 at last night's press conference, the rumor mill had already basically told us what the console would be made of inside the (as-yet-nonexistent) box: an x86 processor and GPU from AMD and lots of memory.

Sony didn't reveal all of the specifics about its new console last night (and, indeed, the console itself was a notable no-show), but it did give us enough information to be able to draw some conclusions about just what the hardware can do. Let's talk about what components Sony is using, why it's using them, and what kind of performance we can expect from Sony's latest console when it ships this holiday season.

The CPU


AMD's Jaguar architecture, used for the PS4's eight CPU cores, is a follow-up to the company's Bobcat architecture for netbooks and low-power devices. AMD

We'll get started with the components of most interest to gamers: the chip that actually pushes all those polygons.


Aurich Lawson

My family has been on the Internet since 1998 or so, but I didn't really think much about Internet security at first. Oh sure, I made sure our eMachines desktop (and its 433MHz Celeron CPU) was always running the latest Internet Explorer version, and I tried not to use the same password for everything. But I didn't give much thought to where my Web traffic was going or what path it took from our computer to the Web server and back. I was dimly aware that e-mail, as one of my teachers put it, was in those days "about as private as sticking your head out the window and yelling." And I didn't do much with that knowledge.

That sort of attitude was dangerous then, and the increasing sophistication of readily available hacking tools makes it even more dangerous now. Luckily, the state of Internet security has also gotten better—in this article, the first in a five-part series covering online security, we're going to talk a bit about keeping yourself (and your business) safe on the Web. Even if you know what lurks in the dark corners of the Internet, chances are someone you know doesn't. So consider this guide and its follow-ups a handy crash course for those unschooled in the nuances of online security; security aficionados should check out later entries in the series for more advanced information.

We'll begin today with some basic information about encryption on the Internet and how to use it to safeguard your personal information as you use the Web, before moving on to malware, mobile app security, and other topics in future entries. 
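For a taste of what that encryption looks like in practice, here is a small Python sketch that opens a TLS connection to a site and prints a few details of its certificate. The hostname is just an example, and this is an illustration rather than anything the series itself prescribes.

import socket
import ssl

hostname = "example.com"  # any HTTPS site; used purely as an example
context = ssl.create_default_context()  # verifies the certificate against system CAs

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
        print("Expires:", cert["notAfter"])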



Nvidia CEO Jen-Hsun Huang unveils the Nvidia Grid server at the company's CES presentation.

Andrew Cunningham

The most interesting news to come out of Nvidia's two-hour-plus press conference Sunday night was undoubtedly the Tegra 4 mobile processor, followed closely by its intriguing Project Shield tablet-turned-handheld game console. However, company CEO Jen-Hsun Huang also revealed a small morsel of news about the cloud gaming initiative that Nvidia talked up back in May: the Nvidia Grid, the company's own server designed specifically to serve as the back end for cloud gaming services.

Thus far, virtualized and streaming game services have not set the world on fire. OnLive probably had the highest profile of any such service, and though it continues to live on, it has been defined more by its troubled financial history than its success.

We stopped by Nvidia's CES booth to get some additional insight into just how the Grid servers do their thing and how Nvidia is looking to overcome the technical drawbacks inherent to cloud gaming services.


Though virtual machines have become indispensable in the server room over the last few years, desktop virtualization has been less successful. One of the reasons has been performance, and specifically graphics performance—modern virtualization products are generally pretty good at dynamically allocating CPU power, RAM, and drive space as clients need them, but graphics just haven't been as good as they are on an actual desktop.

NVIDIA wants to solve this problem with its VGX virtualization platform, which it unveiled at its GPU Technology Conference in May. As pitched, the technology will allow virtual machines to use a graphics card installed in a server to accelerate applications, games, and video. Through NVIDIA's VGX Hypervisor, compatible virtualization software (primarily from Citrix, though Microsoft's RemoteFX is also partially supported) can use the GPU directly, allowing thin clients, tablets, and other devices to more closely replicate the experience of using actual desktop hardware.

NVIDIA's VGX K1 is designed to bring basic graphics acceleration to a relatively large number of users. NVIDIA

When last we heard about the hardware that drives this technology, NVIDIA was talking up a board with four GPUs based on its Kepler architecture. That card, now known as the NVIDIA VGX K1, is built to provide basic 3D and video acceleration to a large number of users—up to 100, according to NVIDIA's marketing materials. Each of this card's four GPUs uses 192 of NVIDIA's graphics cores and 4GB of DDR3 RAM (for a total of 768 cores and 16GB of memory), and has a reasonably modest TDP of 150 watts—for reference, NVIDIA's high-end GTX 680 desktop graphics card has a TDP of 195W, and the dual-GPU version (the GTX 690) steps this up to 300W.
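Based only on the figures quoted above, here is a quick back-of-the-envelope sketch of what the K1's resources work out to per user at NVIDIA's quoted 100-user maximum.

# Back-of-the-envelope math using only the figures quoted above.
gpus = 4
cores_per_gpu = 192
memory_per_gpu_gb = 4
max_users = 100  # NVIDIA's quoted ceiling for basic acceleration

total_cores = gpus * cores_per_gpu          # 768 cores
total_memory_gb = gpus * memory_per_gpu_gb  # 16GB

print(f"Total: {total_cores} cores, {total_memory_gb}GB of memory")
print(f"Per user at {max_users} users: {total_cores / max_users:.2f} cores, "
      f"{total_memory_gb * 1024 / max_users:.0f}MB of memory")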
