
Samsung


An anonymous reader sends this news from Al-Jazeera:
"BP has been accused of hiring internet 'trolls' to purposefully attack, harass, and sometimes threaten people who have been critical of how the oil giant has handled its disaster in the Gulf of Mexico. The oil firm hired the international PR company Ogilvy & Mather to run the BP America Facebook page during the oil disaster, which released at least 4.9 million barrels of oil into the Gulf in what is to date the single largest environmental disaster in U.S. history. The page was meant to encourage interaction with BP, but when people posted comments that were critical of how BP was handling the crisis, they were often attacked, bullied, and sometimes directly threatened. ... BP's 'astroturfing' efforts and use of 'trolls' have been reported as pursuing users' personal information, then tracking and posting IP addresses of users, contacting their employers, threatening to contact family members, and using photos of critics' family members to create false Facebook profiles, and even threatening to affect the potential outcome of individual compensation claims against BP."

Original author: Andrew Cunningham


I log some face-on time with Glass at Google I/O.

Florence Ion

"When you're at a concert and the band takes the stage, nowadays 50,000 phones and tablets go into the air," said Google Senior Development Advocate Timothy Jordan in the first Google Glass session of this year's Google I/O. "Which isn't all that weird, except that people seem to be looking at the tablets more than they are the folks onstage or the experience that they're having. It's crazy because we love what technology gives us, but it's a bummer when it gets in the way, when it gets between us and our lives, and that's what Glass is addressing."

The upshot of this perspective is that Glass and its software are designed for quick use. You fire it up, do what you want to do, and get back to your business without the time spent diving into your pocket for your phone, unlocking it, and so on. Whether this process is more distracting than talking to someone with Glass strapped to his or her face is another conversation, but this is the problem Google is attempting to solve.

Since Google I/O is a developer's conference, the Glass sessions didn't focus on the social implications of using Glass or the privacy questions that some have raised. Rather, the focus was on how to make applications for this new type of device, something that is designed to give you what you want at a moment's notice and then get out of the way. Here's a quick look at what that ethos does to the platform's applications.

Read 22 remaining paragraphs | Comments

Original author: Andrew Cunningham

So far this year's Google I/O has been very developer-centric—perhaps not surprising given that I/O is, at the end of the day, a developer's conference. Especially compared to last year's skydiving, Glass-revealing, Nexus-introducing keynote, yesterday's three-and-a-half-hour keynote presentation focused overwhelmingly on back-end technologies rather than concrete products aimed at consumers.

There's still plenty to see. All this year we've been taking photos to show you just what it's like to cover these shows—we've shown you things as large as CES and as small as Nvidia's GPU Technology Conference. Our pictures from the first day of Google I/O should give you some idea of what it's like to attend a developer conference for one of tech's most influential companies.


You are here

I/O is held in the west hall of the Moscone Center, and between the giant Google signs and this real-life Google Maps pin you'd be hard-pressed to miss it.

Andrew Cunningham

20 more images in gallery

Read on Ars Technica | Comments

Original author: Casey Johnston


Pichai seems open to Android meaning lots of different things to lots of people and companies.

It Came from China

An interview with Sundar Pichai over at Wired has settled some questions about suspected Google plans, rivalries, and alliances. Pichai was recently announced as Andy Rubin’s replacement as head of Android, and he expressed cool confidence ahead of Google I/O about the company’s relationships with both Facebook and Samsung. He even felt good about the future of the spotty Android OS update situation.

Tensions between Google and Samsung, the overwhelmingly dominant Android handset manufacturer, are reportedly rising. But Pichai expressed nothing but goodwill toward the company. “We work with them on pretty much almost all our important products,” Pichai said while brandishing his own Samsung Galaxy S 4. “Samsung plays a critical role in helping Android be successful.”

Pichai noted in particular the need for companies that make “innovation in displays [and] in batteries” a priority. His attitude toward Motorola, which Google bought almost two years ago, was more nonchalant: “For the purposes of the Android ecosystem, Motorola is [just another] partner.”

Read 5 remaining paragraphs | Comments

Original author: Peter Bright

AMD

AMD wants to talk about Heterogeneous Systems Architecture (HSA), its vision for the future of system architectures. To that end, it held a press conference last week to discuss what it's calling "heterogeneous Uniform Memory Access" (hUMA). The company outlined what it is doing and why, reaffirming the things it has been saying for the last couple of years.

The central HSA concept is that systems will have multiple kinds of processors, connected together and operating as peers. The two main kinds are the conventional, versatile CPU and the more specialized GPU.

Modern GPUs have enormous parallel arithmetic power, especially floating point arithmetic, but are poorly suited to single-threaded code with lots of branches. Modern CPUs are well suited to single-threaded code with lots of branches, but less well suited to massively parallel number crunching. Splitting workloads between a CPU and a GPU, using each for the work it's good at, has driven the development of general-purpose GPU (GPGPU) computing.
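To make that split concrete, here is a small illustrative sketch. The workloads and function names are our own examples, not from AMD's presentation, and both run serially here; the point is the *shape* of the work. Uniform per-element arithmetic like SAXPY maps well onto a GPU's wide parallel ALUs, while a branchy, sequentially dependent loop like Collatz iteration favors a CPU.

```python
# Illustrative only: two workload shapes, both executed serially in Python.

def saxpy(a, xs, ys):
    """Uniform per-element arithmetic (a*x + y): embarrassingly parallel,
    the kind of work a GPU's wide ALUs excel at."""
    return [a * x + y for x, y in zip(xs, ys)]

def collatz_steps(n):
    """Branch-heavy, data-dependent loop: each step depends on the previous
    result, so it suits a CPU's branch predictors and fast serial cores."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # [12.0, 14.0, 16.0]
print(collatz_steps(27))  # 111
```

In an HSA system, the idea is that both kinds of work could operate on the same memory without the explicit copies that GPGPU programming traditionally requires.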

Read 21 remaining paragraphs | Comments

Original author: Andrew Cunningham


The Galaxy S 4's display is a sizable step forward for PenTile AMOLED, according to DisplayMate's Raymond Soneira.

Florence Ion

We've already given you our subjective impressions of Samsung's Galaxy S 4 and its 1080p AMOLED display, but for those of you who hunger for quantitative data, Dr. Raymond Soneira of DisplayMate has given the phone an in-depth shakedown. Soneira compares the screen's brightness, contrast, color gamut, and power consumption to both the Galaxy S III (which also uses an AMOLED display) and the IPS panel in the iPhone 5. He found that Samsung's AMOLED technology is still fighting some of its inherent weaknesses, but it has made great strides even since the Galaxy S III's release last year.

To recap: both the S III and S 4 use PenTile AMOLED screens, which use a slightly different subpixel arrangement than traditional LCDs. A pixel in a standard LCD panel has one red, one green, and one blue stripe; PenTile uses alternating red-green-blue-green subpixels, taking advantage of the eye's sensitivity to green to display the same image using fewer total subpixels. These screens cost less to manufacture but can have issues with color accuracy and text crispness. The two display types are also lit differently: white LEDs behind the iPhone's LCD shine through its red, green, and blue subpixels to create an image, while AMOLED subpixels emit their own light and need no backlight at all. This has implications for brightness, contrast, and power consumption.
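The subpixel savings are easy to quantify. A hypothetical sketch (the function and layout names are ours, purely for illustration):

```python
# Subpixels needed to drive a panel of a given resolution, for the two
# layouts described above. Illustrative arithmetic, not a display API.

def subpixel_count(width, height, layout):
    pixels = width * height
    if layout == "rgb-stripe":
        return pixels * 3  # one red, one green, one blue per pixel
    if layout == "pentile-rgbg":
        return pixels * 2  # green for every pixel; red and blue shared
    raise ValueError(f"unknown layout: {layout}")

print(subpixel_count(1920, 1080, "rgb-stripe"))    # 6220800
print(subpixel_count(1920, 1080, "pentile-rgbg"))  # 4147200
```

At 1080p, PenTile drives the same pixel grid with two-thirds the subpixels of an RGB stripe, which is where the manufacturing savings come from.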


A close-up shot of PenTile AMOLED in the Nexus One, when the tech was much less mature.

Luke Hutchinson

We'll try to boil Soneira's findings down to their essence. One obvious benefit of the S 4 is its pixel density, which at 441 ppi is considerably higher than that of either the S III or the iPhone 5. Soneira says this helps overcome the imbalance between PenTile's many green subpixels and its less numerous red and blue ones, all but banishing PenTile's "fuzzy text" issues:

Read 5 remaining paragraphs | Comments
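As an aside, the 441 ppi figure can be reproduced from the panel's geometry. A quick sketch, assuming the S 4's roughly 5-inch, 1920×1080 panel (the diagonal size is approximate):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """PPI = diagonal resolution in pixels / diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Galaxy S 4: 1920x1080 on a roughly 5-inch diagonal
print(round(pixels_per_inch(1920, 1080, 5.0)))  # 441
```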

Original author: Andrew Cunningham

Aurich Lawson / Thinkstock

Welcome back to our three-part series on touchscreen technology. Last time, Florence Ion walked you through the technology's past, from the invention of the first touchscreens in the 1960s all the way up through the mid-2000s. During this period, different versions of the technology appeared in everything from PCs to early cell phones to personal digital assistants like Apple's Newton and the Palm Pilot. But all of these gadgets proved to be little more than a tease, a prelude to the main event. In this second part in our series, we'll be talking about touchscreens in the here-and-now.

When you think about touchscreens today, you probably think about smartphones and tablets, and for good reason. The 2007 introduction of the iPhone kicked off a transformation that turned both from niche products into billion-dollar industries. The current fierce competition from software like Android and Windows Phone (as well as hardware makers like Samsung and a host of others) means that new products are being introduced at a frantic pace.

The screens themselves are just one of the driving forces that make these devices possible (and successful). Ever-smaller, ever-faster chips allow a phone to do things only a heavy-duty desktop could do just a decade or so ago, something we've discussed in detail elsewhere. The software that powers these devices is more important, though. Where older tablets and PDAs required a stylus, a cramped physical keyboard, or a trackball, mobile software has adapted to better suit humans' native pointing device: the larger, clumsier, but much more convenient finger.

Read 22 remaining paragraphs | Comments

Original author: Jon Brodkin

The Linux Foundation has taken control of the open source Xen virtualization platform and enlisted a dozen industry giants in a quest to make it the leading software for building cloud networks.

The 10-year-old Xen hypervisor was formerly a community project sponsored by Citrix, much as the Fedora operating system is a community project sponsored by Red Hat. Citrix was looking to place Xen into a vendor-neutral organization, however, and the Linux Foundation move was announced today. The list of companies that will "contribute to and guide the Xen Project" is impressive, including Amazon Web Services, AMD, Bromium, Calxeda, CA Technologies, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon.

Amazon is perhaps the most significant name on that list in regard to Xen. The Amazon Elastic Compute Cloud is likely the most widely used public infrastructure-as-a-service (IaaS) cloud, and it is built on Xen virtualization. Rackspace's public cloud also uses Xen. Linux Foundation Executive Director Jim Zemlin noted in his blog that Xen "is being deployed in public IaaS environments by some of the world's largest companies."
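For context on what operating Xen looks like in practice: a guest under Xen's xl toolstack is defined by a small configuration file. A minimal, purely illustrative example (every name and path here is hypothetical):

```
# guest.cfg -- hypothetical minimal Xen PV guest definition
name   = "example-guest"
memory = 1024                          # MB of RAM for the guest
vcpus  = 2                             # virtual CPUs
kernel = "/boot/vmlinuz-guest"         # paravirtualized kernel to boot
disk   = ['phy:/dev/vg0/guest,xvda,w'] # block device seen as xvda
vif    = ['bridge=xenbr0']             # network interface on a bridge
```

A guest defined this way would typically be started with `xl create guest.cfg`; large IaaS operators automate this layer with their own orchestration tooling.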

Read 4 remaining paragraphs | Comments

Original author: Andrew Cunningham

Andrew Cunningham / Aurich Lawson

A desktop PC used to need a lot of different chips to make it work. You had the big parts: the CPU that executed most of your code and the GPU that rendered your pretty 3D graphics. But there were a lot of smaller bits too: a chip called the northbridge handled all communication between the CPU, GPU, and RAM, while the southbridge handled communication between the northbridge and other interfaces like USB or SATA. Separate controller chips for things like USB ports, Ethernet ports, and audio were also often required if this functionality wasn't already integrated into the southbridge itself.

As chip manufacturing processes have improved, it's now possible to cram more and more of these previously separate components into a single chip. This not only reduces system complexity, cost, and power consumption, but it also saves space, making it possible to fit a high-end computer from yesteryear into a smartphone that can fit in your pocket. It's these technological advancements that have given rise to the system-on-a-chip (SoC), one monolithic chip that's home to all of the major components that make these devices tick.

The fact that every one of these chips includes what is essentially an entire computer can make keeping track of an individual chip's features and performance quite time-consuming. To help you keep things straight, we've assembled this handy guide that will walk you through the basics of how an SoC is put together. It will also serve as a guide to most of the current (and future, where applicable) chips available from the big players making SoCs today: Apple, Qualcomm, Samsung, Nvidia, Texas Instruments, Intel, and AMD. There's simply too much to talk about to fit everything into one article of reasonable length, but if you've been wondering what makes a Snapdragon different from a Tegra, here's a start.

Read 56 remaining paragraphs | Comments
