Technology Lab

Original author: Jon Brodkin

The Linux Foundation has taken control of the open source Xen virtualization platform and enlisted a dozen industry giants in a quest to be the leading software for building cloud networks.

The 10-year-old Xen hypervisor was formerly a community project sponsored by Citrix, much as the Fedora operating system is a community project sponsored by Red Hat. Citrix was looking to place Xen into a vendor-neutral organization, however, and the Linux Foundation move was announced today. The list of companies that will "contribute to and guide the Xen Project" is impressive, including Amazon Web Services, AMD, Bromium, Calxeda, CA Technologies, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon.

Amazon is perhaps the most significant name on that list in regard to Xen. The Amazon Elastic Compute Cloud is likely the most widely used public infrastructure-as-a-service (IaaS) cloud, and it is built on Xen virtualization. Rackspace's public cloud also uses Xen. Linux Foundation Executive Director Jim Zemlin noted in his blog that Xen "is being deployed in public IaaS environments by some of the world's largest companies."

Original author: Stack Exchange

This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

user396089 is more than competent when it comes to writing code in "bits and pieces." Planning and synthesizing that code into a complex, coherent app is the hard part. "So, my question is, how do I improve my design skills?" he asks. And to that, some more experienced programmers answered...

See the original question here.

Original author: Sean Gallagher

Think mobile devices are low-power? A study by the Center for Energy-Efficient Telecommunications—a joint effort between AT&T's Bell Labs and the University of Melbourne in Australia—finds that wireless networking infrastructure accounts for 10 times more power consumption worldwide than data centers. In total, it is responsible for 90 percent of the power used by cloud infrastructure. And that consumption is growing fast.

The study was in part a rebuttal to a Greenpeace report that focused on the power consumption of data centers. "The energy consumption of wireless access dominates data center consumption by a significant margin," the authors of the CEET study wrote. One of the CEET researchers' findings was that wired networks and data center-based applications could actually reduce overall computing energy consumption by allowing for less powerful client devices.

According to the CEET study, by 2015 wireless "cloud" infrastructure will consume as much as 43 terawatt-hours of electricity worldwide while generating 30 megatons of carbon dioxide—the equivalent of the carbon emissions of 4.9 million automobiles. That projected consumption is a 460 percent increase over the 9.2 TWh consumed by wireless infrastructure in 2012.
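A quick back-of-the-envelope pass over those figures, using only the numbers quoted above, shows how they fit together (a sketch, not part of the study):

    # Figures quoted from the CEET study above.
    wireless_2015_twh = 43.0   # projected wireless "cloud" consumption, 2015
    wireless_2012_twh = 9.2    # wireless infrastructure consumption, 2012
    co2_megatons = 30.0        # projected CO2 from that consumption, 2015
    cars = 4.9e6               # automobiles cited as the equivalent

    print(f"2015 vs. 2012: {wireless_2015_twh / wireless_2012_twh:.1f}x growth")     # ~4.7x
    print(f"CO2 per car-equivalent: {co2_megatons * 1e6 / cars:.1f} tons per year")  # ~6.1

The projection works out to roughly 4.7 times the 2012 figure and a little over six tons of carbon dioxide per car-equivalent.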

Original author: Peter Bright

Microsoft Accounts—the credentials used for Hotmail, Outlook.com, the Windows Store, and other Microsoft services—will soon offer two-factor authentication to ensure that accounts can't be compromised through disclosure of the password alone.

As revealed by LiveSide, the two-factor authentication will use a phone app—already available for Windows Phone, even though two-factor authentication isn't switched on yet—to generate a random code. This code must be entered alongside the password.

For systems that are used regularly, it's possible to disable the code requirement and allow logging in with the password alone. For systems that only accept passwords, such as e-mail clients, it appears that Microsoft will allow the creation of one-off application-specific passwords.
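Microsoft hasn't said exactly how the codes are generated, but apps like this typically implement time-based one-time passwords (TOTP, RFC 6238). Assuming that is what the phone app does, a minimal Python sketch of the idea looks like this (the shared secret is hypothetical; in practice it is provisioned when the app is paired with the account):

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
        """Generate a time-based one-time code from a shared secret."""
        counter = int(time.time()) // interval             # 30-second time step
        msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Hypothetical shared secret; the real one is provisioned during setup.
    print(totp(b"example-shared-secret"))

The server runs the same computation with the same secret and compares results, which is why each code is only useful for a short window; application-specific passwords cover clients that can only send a single static password.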

Original author: Stack Exchange

This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

Ankit works in J2SE (core Java). During code reviews, he's frequently asked to reduce his lines of code (LOC). "It's not about removing redundant code," he writes. To his colleagues, "it's about following a style." Style over substance. Ankit says the readability of his code is suffering under the dogmatic demands of his reviewers. So how does he find the right balance between brevity and readability?

See the original question here.

Original author: Sean Gallagher


The ArxCis-NV DIMM combines DDR3 dynamic memory with a flash memory backup. Credit: Viking Technology

The server world still waits for DDR4, the next generation of dynamic memory, to be ready for prime time. In the meantime, a new set of memory boards from Viking is looking to squeeze more performance out of servers not by providing faster memory, but by making it safer to keep more in memory and less on disk or SSD. Viking Technology has begun supplying dual in-line memory modules that combine DDR3 dynamic memory with NAND flash memory to create non-volatile RAM for servers and storage arrays—modules that don't lose their memory when the systems they're in lose power or shut down.

The ArxCis-NV DIMM, which Viking demonstrated at the Storage Networking Industry Association's SNW Spring conference in Orlando this week, plugs into standard DIMM memory slots in servers and RAID controller cards. Viking isn't the only player in the non-volatile DIMM game—Micron Technology and AgigA Tech announced their own NVDIMM effort in November—but it is first to market. The modules shipping now to a select group of server manufacturers have 4GB of dynamic RAM and 8GB of NAND memory. Modules with double those capacities are planned for later in the year, and modules with 16GB of DRAM and 32GB of NAND are in the works for next year.

The ArxCis can be plugged into existing servers and RAID controllers today as a substitute for battery-backed (BBU) memory modules. The modules are even equipped with batteries to power a last-gasp write to NAND memory in the event of a power outage. But the ArxCis is more than a better backup in the event of system failure. Viking's non-volatile DIMMs are primarily aimed at big in-memory computing tasks, such as high-speed in-memory transactional database systems and the indices used in search engines and other "hyper-scale" computing applications. Facebook's "Unicorn" search engine system, for example, keeps massive indices in memory to allow for real-time response to user queries, as does the "type-ahead" feature in Google's search.
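The payoff for those workloads is that the in-memory structures survive a power cut instead of having to be rebuilt or reloaded from disk. The sketch below is only an illustration of that idea in ordinary Python; on an ArxCis-style module the copy from DRAM to NAND happens in hardware when power fails, not through an explicit save:

    import pickle

    # Toy stand-in for the kind of index these workloads keep in RAM.
    index = {"xen": [1, 4], "webkit": [2, 7], "dns": [3]}

    # On an NVDIMM the DRAM contents are written to NAND automatically at
    # power loss; this explicit snapshot only shows what gets preserved.
    with open("index.snapshot", "wb") as f:
        pickle.dump(index, f)

    with open("index.snapshot", "rb") as f:
        restored = pickle.load(f)

    assert restored == index   # nothing to rebuild after the restart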

Original author: Peter Bright

Image credit: Aurich Lawson (with apologies to Bill Watterson)

Google announced today that it is forking the WebKit rendering engine on which its Chrome browser is based. The company is naming its new engine "Blink."

The WebKit project, itself a fork of a rendering engine called KHTML, was started by Apple in 2001. The project includes a core rendering engine for handling HTML and CSS (WebCore), a JavaScript engine (JavaScriptCore), and a high-level API for embedding it into browsers (WebKit).

Though the project is widely known as "WebKit," Google Chrome has used only WebCore since its launch in late 2008. Apple's Safari originally used the WebKit wrapper and now uses its successor, WebKit2. Many other browsers use varying amounts of the WebKit project, including the Symbian S60 browser, the BlackBerry browser, the webOS browser, and the Android browser.

Original author: Peter Bright

Mozilla today announced a collaboration with Samsung to produce a new browser engine designed to take full advantage of processors with multiple cores.

For the last couple of years, Mozilla Research has been developing a new programming language, Rust, that's designed to provide the same performance and power as C++, but without the same risk of bugs and security flaws, and with built-in mechanisms for exploiting multicore processors.

Using Rust, the company has been working on a prototype browser engine, named Servo.

Original author: Dan Goodin

Image credit: Aurich Lawson / Thinkstock

Tens of thousands of websites, some operated by The Los Angeles Times, Seagate, and other reputable companies, have recently come under the spell of "Darkleech," a mysterious exploitation toolkit that exposes visitors to potent malware attacks.

The ongoing attacks, estimated to have infected 20,000 websites in the past few weeks alone, are significant because of their success in targeting Apache, by far the Internet's most popular Web server software. Once it takes hold, Darkleech injects invisible code into webpages, which in turn surreptitiously opens a connection that exposes visitors to malicious third-party websites, researchers said. Although the attacks have been active since at least August, no one has been able to positively identify the weakness attackers are using to commandeer the Apache-based machines. Vulnerabilities in Plesk, cPanel, or other software used to administer websites are one possibility, but researchers aren't ruling out password cracking, social engineering, or attacks that exploit unknown bugs in frequently used applications and OSes.

Researchers also don't know precisely how many sites have been infected by Darkleech. The server malware employs a sophisticated array of conditions to determine when to inject malicious links into the webpages shown to end users. Visitors using IP addresses belonging to security and hosting firms are passed over, as are people who have recently been attacked or who don't access the pages from specific search queries. The ability of Darkleech to inject unique links on the fly is also hindering research into the elusive infection toolkit.
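Darkleech's own code isn't public in this form; the sketch below uses invented names and values purely to illustrate the kind of visitor-filtering conditions researchers describe:

    from ipaddress import ip_address, ip_network

    # All names and values here are invented for illustration only.
    SKIPPED_NETWORKS = [ip_network("192.0.2.0/24")]   # stand-in for security/hosting firm ranges
    already_served = set()                            # visitors who were recently targeted

    def should_inject(visitor_ip: str, referrer: str) -> bool:
        """Illustrate the described checks: skip researcher IPs, repeat
        visitors, and anyone not arriving from a search query."""
        ip = ip_address(visitor_ip)
        if any(ip in net for net in SKIPPED_NETWORKS):
            return False
        if visitor_ip in already_served:
            return False
        if "q=" not in referrer:                      # crude stand-in for search-referrer checks
            return False
        already_served.add(visitor_ip)
        return True

Filtering like this is part of why infection counts are uncertain: a researcher revisiting a page, or scanning from a known network, simply never sees the injected links.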

Original author: Sean Gallagher

Image credit: Aurich Lawson

A little more than a year ago, details emerged about an effort by some members of the hacktivist group Anonymous to build a new weapon to replace their aging denial-of-service arsenal. The new weapon would use the Internet's Domain Name System (DNS) as a force-multiplier to bring the servers of those who offended the group to their metaphorical knees. Around the same time, an alleged plan for an Anonymous operation, "Operation Global Blackout" (later dismissed by some security experts and Anonymous members as a "massive troll"), sought to turn DNS against the very core of the Internet itself in protest against the Stop Online Piracy Act.

This week, an attack using the technique proposed for use in that attack tool and operation—both of which failed to materialize—was at the heart of an ongoing denial-of-service assault on Spamhaus, the anti-spam clearing house organization. And while it hasn't brought the Internet itself down, it has caused major slowdowns in the Internet's core networks.

DNS amplification (or DNS reflection) remains possible after years of warnings from security experts. Its power is a testament to how hard it is to get organizations to make simple changes that would prevent even recognized threats. Some network providers have made tweaks that prevent botnets or "volunteer" systems within their networks from staging such attacks. But thanks to public cloud services, "bulletproof" hosting services, and other services that allow attackers to spawn and then reap hundreds of attacking systems, DNS amplification attacks can still be launched at the whim of a deep-pocketed attacker—like, for example, the cyber-criminals running the spam networks that Spamhaus tries to shut down.
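The force-multiplier effect comes from reflection: the attacker sends small queries whose source address is spoofed to the victim's, and open resolvers deliver their much larger answers to the victim. A rough sketch of the arithmetic, with hypothetical but typical packet sizes (actual amplification varies with the query type and resolver):

    # Hypothetical packet sizes; real values vary by query type and resolver.
    query_bytes = 64        # small UDP query, source address spoofed to the victim
    response_bytes = 3000   # large answer (e.g. an "ANY" response) sent to the victim
    amplification = response_bytes / query_bytes            # ~47x

    attacker_mbps = 100                                      # bandwidth spent on queries
    victim_gbps = attacker_mbps * amplification / 1000
    print(f"Amplification: {amplification:.0f}x")
    print(f"{attacker_mbps} Mbps of queries -> ~{victim_gbps:.1f} Gbps at the target")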
