
Programmer Steve Losh has written a lengthy explanation of what separates good documentation from bad, and of how to plan and write documentation that will actually help people. His overarching point is that documentation should be used to teach, not to dump excessive amounts of unstructured information onto a user. Losh takes many of the common documentation tropes — "read the source," "look at the tests," "read the docstrings" — and makes analogies with learning everyday skills to show how silly they can be. "This is your driving teacher, Ms. Smith. ... If you have any questions about a part of the car while you’re driving, you can ask her and she’ll tell you all about that piece. Here are the keys, good luck!" He has a similar opinion of API documentation: "API documentation is like the user’s manual of a car. When something goes wrong and you need to replace a tire it’s a godsend. But if you’re learning to drive it’s not going to help you because people don’t learn by reading alphabetized lists of disconnected information." Losh's advice on wikis is simple and straightforward: "They are bad and terrible. Do not use them."

Original author: Ben Cherian


Image copyright isak55

In every emerging technology market, hype waxes and wanes. One day a new technology is red hot; the next day it’s old hat. Sometimes the hype pans out, and concepts such as “e-commerce” become a normal way to shop. Other times it doesn’t, and consumers never buy into paying for e-commerce using Beenz or Flooz. Apparently, Whoopi Goldberg and a slew of big-name VCs ended up making a bad bet on the e-currency market in the late 1990s. Whoopi was paid in cash and shares of Flooz; at least she wasn’t paid in Flooz alone! When investing, some bets are great and others are awful, but often one only knows which were the awful ones in retrospect.

What Does “Software Defined” Mean?

In the infrastructure space, there is a growing trend of companies calling themselves “software defined (x).” Often it’s a vendor repositioning a decades-old product; on occasion, though, it’s a smart, nimble startup or a wise incumbent seeing a new way of delivering infrastructure. Either way, the term “software defined” is here to stay, and there is real meaning and value behind it if you look past the hype.

Three software defined terms get bandied around quite often: software defined networking, software defined storage, and the software defined data center. I suspect new terms will soon follow, like software defined security and software defined management. What all these “software defined” concepts really boil down to is virtualization of the underlying component, plus accessibility through a documented API to provision, operate, and manage that component.
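
As a concrete sketch of that definition, the snippet below provisions a virtual network through a documented REST API. The endpoint, payload schema, and field names are entirely hypothetical, invented for illustration rather than taken from any particular vendor's interface:

```python
# A minimal sketch of the "software defined" idea: the same low-level
# resource an admin once configured by hand is provisioned through a
# documented API. Endpoint and schema are hypothetical.
import requests

API = "https://sdn.example.com/v1"  # hypothetical provisioning endpoint

def provision_network(name: str, cidr: str) -> str:
    """Create a virtual network and return its ID."""
    resp = requests.post(f"{API}/networks", json={"name": name, "cidr": cidr})
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    net_id = provision_network("web-tier", "10.0.1.0/24")
    print(f"provisioned network {net_id}")
```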

This trend started once Amazon Web Services came onto the scene and convinced the world that the data center could be abstracted into much smaller units and could be treated as disposable pieces of technology, which in turn could be priced as a utility. Vendors watched Amazon closely and saw how this could apply to the data center of the future.

Since compute was already virtualized by VMware and Xen, projects such as Eucalyptus were launched with the intention of being a “cloud controller” that would manage the virtualized servers and provision virtual machines (VMs). Virtualized storage (a.k.a. software defined storage) was a core part of the offering, and projects like OpenStack Swift and Ceph showed the world that storage could be virtualized and accessed programmatically. Today, software defined networking is the new hotness, and companies like Midokura, VMware/Nicira, Big Switch, and Plexxi are changing the way networks are designed and automated.

The Software Defined Data Center

The software defined data center encompasses all the concepts of software defined networking, software defined storage, cloud computing, automation, management, and security. Every low-level infrastructure component in a data center can be provisioned, operated, and managed through an API. There are not only tenant-facing APIs but also operator-facing APIs, which help the operator automate tasks that were previously manual.

An infrastructure superhero might think, “With great accessibility comes great power.” The data center of the future will be the software defined data center, where every component can be accessed and manipulated through an API. The proliferation of APIs will change the way people work. Programmers who have never formatted a hard drive will be able to provision terabytes of storage. A web application developer will be able to set up complex load balancing rules without ever logging into a router (see the sketch below). IT organizations will start automating the most mundane tasks. Eventually, beautiful applications will be created that mirror the organization’s process and workflow and automate infrastructure management.
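
To make the load-balancing claim concrete, here is a minimal, hypothetical sketch: the rules are plain data submitted through an API, and no one logs into a router. The endpoint and rule schema are invented for illustration:

```python
# Hypothetical sketch: load-balancer rules defined as data and applied
# through an API. No router login, no vendor CLI.
import requests

RULES = [
    {"match": {"path": "/api/*"}, "pool": "app-servers", "algorithm": "least_conn"},
    {"match": {"path": "/*"}, "pool": "static-servers", "algorithm": "round_robin"},
]

resp = requests.put("https://lb.example.com/v1/rules", json={"rules": RULES})
resp.raise_for_status()
print("load balancer rules applied")
```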

IT Organizations Will Respond and Adapt Accordingly

Of course, this means the IT organization will have to adapt. The new base level of knowledge in IT will eventually include some sort of programming ability. Scripting languages like Ruby and Python will soar even higher in popularity. Network administrators will become programmers. System administrators will become programmers. During this time, DevOps (development + operations) will make serious inroads into the enterprise, and silos will be refactored, restructured, or flat-out broken down.

Configuration management tools like Chef and Puppet will be the glue for the software defined data center. Done properly, this will lower the cost of delivering IT services. “Ghosts in the system” will watch all the components (compute, storage, networking, security, etc.) and adapt to changes in real time to increase utilization, performance, security, and quality of service. Monitoring and analytics will be key to realizing this software defined future.
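
A toy sketch of those “ghosts in the system”: a reconciliation loop that watches one utilization metric and nudges capacity through an operator-facing API. All endpoints and names are hypothetical, and a real system would layer this on monitoring pipelines and configuration management rather than raw polling:

```python
# Toy reconciliation loop: watch a metric, adapt capacity via an API.
# Endpoints, pool names, and thresholds are all hypothetical.
import time
import requests

API = "https://dc.example.com/v1"  # hypothetical operator-facing API

def reconcile() -> None:
    """Read CPU utilization and nudge the pool toward a target band."""
    util = requests.get(f"{API}/pools/web/utilization").json()["cpu"]
    if util > 0.80:      # running hot: add a VM to the pool
        requests.post(f"{API}/pools/web/scale", json={"delta": 1})
    elif util < 0.20:    # mostly idle: release a VM
        requests.post(f"{API}/pools/web/scale", json={"delta": -1})

while True:
    reconcile()
    time.sleep(30)       # adapt every 30 seconds
```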

Big Changes in Markets Happen With Very Simple Beginnings

All this amazing innovation comes from two very simple concepts: virtualizing the underlying components and making them accessible through an API.

The IT world might look at the software defined data center and say, “This is nothing new; we’ve been doing this since the ’80s.” I disagree. What’s changed is our universal thinking about accessibility. Ten years ago, we wouldn’t have blinked if a networking product came out without an API. Today, an API is part of what we consider a 1.0 release. This thinking now pervades every component of the data center. Web 2.0 thinking shaped cloud computing, and cloud computing is now bleeding into enterprise thinking. We’re no longer constrained by the need for deep, specialized knowledge of the low-level components just to get basic access to this technology.

With well-documented APIs, we have now turned the entire data center into many instruments that can be played by the IT staff, the musicians. I imagine the software defined data center as a Fantasia-like world where Mickey is the IT staff and the brooms are networking, storage, compute, and security. The magic is in the coordination, cadence, and rhythm of how all the pieces work together. Amazing symphonies of IT will occur in the near future, and that is why the software defined data center is not a trend to overlook. Maybe Whoopi should take a look at this market instead.

Ben Cherian is a serial entrepreneur who loves playing in the intersection of business and technology. He’s currently the Chief Strategy Officer at Midokura, a network virtualization company. Prior to Midokura, he was the GM of Emerging Technologies at DreamHost, where he ran the cloud business unit. Prior to that, Ben ran a cloud-focused managed services company.

Original author: Sean Gallagher


A frame of Timelapse's view of the growth of Las Vegas, Nevada. (Google, USGS)

This story has been updated with additional information and corrections provided by Google after the interview.

In May, Google unveiled Earth Engine, a set of technologies and services that combine Google's existing global mapping capabilities with decades of historical satellite data from both NASA and the US Geological Survey (USGS). One of the first products emerging from Earth Engine is Timelapse—a Web-based view of changes on the Earth's surface over the past three decades, published in collaboration with Time magazine.

The "Global Timelapse" images are also viewable through the Earth Engine site, which allows you to pan and zoom to any location on the planet and watch 30 years of change, thanks to 66 million streaming video tiles. The result is "an incontrovertible description of what's happened on our planet due to urban growth, climate change, et cetera," said Google Vice President of Research and Special Initiatives Alfred Spector.


Original author: Megan Geuss

The Guardian

The Guardian released an interview today with the man who has been the paper's source for a few now-infamous leaked documents that revealed a vast dragnet maintained by the NSA for gathering information on communications in America. That source is Edward Snowden, 29, an employee of American defense contractor Booz Allen Hamilton and a former technical assistant for the CIA.

When The Guardian published a leaked document on Wednesday of last week showing a FISA court order that granted the NSA the power to collect metadata on the phone calls of all of Verizon's customers over a period of three months, it became one of the biggest exposures of privacy-invading action taken by the government without the public's knowledge.

That is, until the next day, when The Guardian and The Washington Post revealed slides pertaining to another NSA project called PRISM, which apparently gathered vast swaths of information on users of Google services, Facebook, Apple, and more. While the companies named in the PRISM slides have all denied participation in such a program, President Obama and a number of senators confirmed the collection of phone call metadata on Friday.


Original author: Mitchell Whitelaw

At CODE2012 I presented a paper on "programmable matter" and the proto-computational work of Ralf Baecker and Martin Howse - part of a long-running project on digital materiality. My sources included interviews with the artists, which I will be publishing here. Ralf Baecker's 2009 The Conversation is a complex physical network, woven from solenoids - electro-mechanical "bits" or binary switches. It was one of the works that started me thinking about this notion of the proto-computational - where artists seem to be stripping digital computing down to its raw materials, only to rebuild it as something weirder. Irrational Computing (2012) - which crafts a "computer" more like a modular synth made from crystals and wires - takes this approach further. Here Baecker begins by responding to this notion of proto-computing.

MW: In your work, especially Irrational Computing, we seem to see some of the primal, material elements of digital computing. But this "proto" computing is also quite unfamiliar - it is chaotic, complex and emergent, we can't control or "program" it, and it is hard to identify familiar elements such as memory vs processor. So it seems that your work is not only deconstructing computing - revealing its components - but also reconstructing it in a strange new form. Would you agree?

RB: It took me a long time to adopt the term "proto-computing". I don't mean proto in a historical or chronological sense; it is more about its state of development. I imagine a device that refers to the raw material dimension of our everyday digital machinery. Something that suddenly appears due to the interaction of matter. What I had in mind was for instance the natural nuclear fission reactor in Oklo, Gabon that was discovered in 1972. A conglomerate of minerals in a rock formation formed the conditions for a functioning nuclear reactor, all by chance. 

Computation is a cultural, not a natural, phenomenon; it embodies several hundred years of knowledge and cultural technics, these days all compressed into a microscopic form (the CPU). In the 18th century the mechanical tradition of automata and symbolic/mathematical thinking merged in the first calculating and astronomical devices. The combinatoric/hermeneutic tradition (e.g. Athanasius Kircher and Ramon Llull) has also been very influential for me. These automatons/concepts were philosophical and epistemological; they were dialogic devices that let us think further, much against our current utilitarian use of technology. Generative utopia.


Schematic of Irrational Computing, courtesy of the artist.

MW: Your work stages a fusion of sound, light and material. In Irrational Computing for example we both see and hear the activity of the crystals in the SiC module. Similarly in The Conversation, the solenoids act as both mechanical / symbolic components and sound generators. So there is a strong sense of the unity of the audible and the visual - their shared material origins. (This is unlike conventional audiovisual media for example where the relation between sound and image is highly constructed). It seems that there is a sense of a kind of material continuum or spectrum here, binding electricity, light, sound, and matter together?

RB: My first contact with art, or media art, came through net art, software art and generative art. I was totally fascinated by it. I started programming generative systems for installations and audiovisual performances. I like a lot of the early screen-based computer graphics/animation stuff. The pure reduction to wireframes, simple geometric shapes. I had the feeling that in this case concept and representation almost touch each other. But I got lost working with universal machines (Turing machines). With Rechnender Raum I started to do some kind of subjective reappropriation of the digital, so I started to build my very own non-universal devices. Rechnender Raum could also be read as a kinetic interpretation of a cellular automaton algorithm. Even if the Turing machine is a theoretical machine, it feels very plastic to me. It's a metaphorical machine that shows the conceptual relation of space and time. Computers are basically transposers between space and time, even without seeing the actual outcome of a simulation. I like to expose the hidden structures. They are more appealing to me than the image on the screen.
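
For readers unfamiliar with the algorithm Baecker mentions, here is a minimal elementary cellular automaton (Rule 110) in Python. It is an illustrative sketch of the class of system Rechnender Raum reinterprets kinetically, not Baecker's actual rule set:

```python
# Elementary cellular automaton, Rule 110: each cell's next state is
# decided by its three-cell neighborhood, read as a 3-bit index into
# the rule number. Complex structure emerges from this local rule.
RULE = 110
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Each generation, every cell consults only its immediate neighbors, yet the printed rows develop intricate, unpredictable patterns; that gap between a tiny specification and rich behavior is the point of contact with Baecker's non-universal machines.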

MW: There is a theme of complex but insular networks in your work. In The Conversation this is very clear: a network of internal relationships seeking a dynamic equilibrium. Similarly in Irrational Computing, modules like the phase-locked loop have this insular complexity. Can you discuss this a little? The tendency reminds me of notions of self-referentiality, for example in the writing of Hofstadter, where recursion and self-reference are both logical paradoxes (as in Gödel's theorem) and key attributes of consciousness. Your introverted networks have a strong generative character, where complex dynamics emerge from a tightly constrained set of elements and relationships.

RB: Sure, I'm fascinated by these kinds of emergent processes and how they appear at different scales. But I always find it difficult to use the attribute "consciousness." I think these kinds of chaotic attractors have a beauty of their own. However closed these systems look, they are always influenced by their environment. The perfect example for me is the flame of a candle: a very dynamic, complex process communicating with its environment, which generates the dynamics.

MW: You describe The Conversation as "pataphysical", and mention the "mystic" and "magic" aspects of Irrational Computing. Can you say some more about this aspect of your work? Is there a sort of romantic or poetic idea here about what is beyond the rational, or is this a more systematic alternative to how we understand the world?

RB: Yes, it refers to another kind of thinking: a thinking that works against "cause and reaction," a thinking of hidden relations, connections and uncertainty. I like Claude Lévi-Strauss' term "The Savage Mind".

Original author: Eric Johnson

Oculus VR’s Palmer Luckey, left, and Nate Mitchell, right. At center, AllThingsD’s Lauren Goode tries out the Oculus Rift at CES 2013.

This is the second part of our two-part Q&A with Palmer Luckey and Nate Mitchell, the co-founders of virtual-reality gaming company Oculus VR. In Part One, Luckey and Mitchell discussed controlling expectations, what they want from developers, and the challenges of trying to make games do something radically different.

AllThingsD: What do you guys think about Google Glass? They’ve got their dev kits out right now, too, and –

Palmer Luckey: — What’s Google Glass? [laughs]

No, seriously, they’re doing something sort of similar with getting this wearable computing device to developers. Does the early buzz about Glass worry you?

Luckey: No. They’re not a gaming device, and they’re not a VR device, and they’re not an immersive device, and they’re five times more expensive than us.

Nate Mitchell: It’s just a completely different product. Wearable computing is super-interesting, and we’d love to see more wearable computing projects in the market. At Oculus, especially, we’re excited about the possibilities of Google Glass. We’ve seen it, we’ve checked it out, it’s very cool. But if you bring them together –

Luckey: Our image size is like 15 times larger than theirs. It’s like the difference between looking at a watch screen and a 60-inch monitor. It’s just an enormous difference.


Mitchell: With the Rift, you’re in there. You’re totally immersed in the world. I think one of the things people keep bringing up (with Glass) is the awkwardness, the social aspect. For the Rift, you strap into this thing, and you’re gone.

Luckey: It’s about being inside the virtual world, not caring about the real one.

Mitchell: You could put your Glass on in the virtual space.

Luckey: We could do that! We could simulate Glass. … It’s not that hard. You just have a tiny heads-up display floating there. A really tiny one.

Mitchell: I like it.

“Okay, Rift, take a picture. Okay, Rift, record a video …”

Luckey: There’s actually Second Life mods like that. People sell heads-up displays that you can buy.

Mitchell: Really?

Luckey: And they put information in there like distance to waypoints and stuff.

Mitchell: Oh, that’s cool!

Luckey: Yeah, they overlay it on the screen when your character’s wearing it.

I never really “got” Second Life. Minecraft, I can wrap my head around quickly. But Second Life …

Luckey: It’s very difficult to get into. There’s a steep learning curve. The last time I went into Second Life was to buy bitcoins from a crazy guy who was selling them below market value, but you had to go into Second Life to meet with him.

Mitchell: The underbelly of the Internet.

Luckey: They’re actually working on Oculus Rift support, though. The kind of people who make games like Second Life definitely see the potential for virtual reality — being able to step into your virtual life.

And if you’re completely immersed in the game, I guess that wherever you’re playing, you need to trust whoever’s around you.

Mitchell: Absolutely. There’s already some sneaking up on people happening in the office. Someone’s developing, they’re testing the latest integration, and then Palmer comes up and puts his hands on their shoulders: “Heyyyy, Andrew! What’s going on?” There’s a trust factor.

Luckey: Have you seen the Guillotine Simulator? Some people are showing that without even telling the person what it is: “Here, check this out!” “Whoa, what’s going on?” And then — [guillotine sound effect]

Mitchell: One thing that that does lead into is, we’re exploring ways to just improve the usability of the device. When you put on the Rift, especially with the dev kit, you’re shut off from the outside world. What we’re looking at doing is how can we make it easy to pull it off. Right now, you have to slip it over your head like ski goggles. The dev kit was designed to be this functional tool, not the perfect play-for-10-hours device. With the consumer version, we’re going for that polished user experience.

What about motion sickness? Is it possible to overcome the current need for people to only play for a short period of time on their first go?

Luckey: The better we make the hardware, the easier it’ll be for people to just pick up and play. Right now, the hardware isn’t perfect. That’s one of the innate problems of VR: You’re trying to make something that tricks your brain into thinking it’s real. Your brain is very sensitive at telling you things are wrong. The better you can make it, the more realistic you can make it, the more easily your brain’s gonna accept the illusion and not be throwing warning bells.

You mentioned in one of your recent speeches that the Scout in Team Fortress 2 –

Luckey: — he’s running at like 40 miles per hour. But it’s not just, “Oh, I’m running fast.” It’s the physics of the whole thing. In real life, if you are driving at 40mph, you can’t instantly start moving backward. You can’t instantly start strafing sideways. You have inertia. And that’s something that, right now, games are not designed to have. You’re reacting in these impossible ways.
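
To make Luckey’s inertia point concrete, here is a toy sketch of the difference: instead of snapping velocity to whatever the player requests, the simulation accelerates toward it over time. The numbers are invented, and real engines integrate far richer physics per frame:

```python
# Toy inertia model: velocity moves toward the requested target at a
# bounded rate, so a player at speed cannot instantly reverse.
def step(velocity: float, target: float, accel: float, dt: float) -> float:
    """Move velocity toward target by at most accel*dt per step."""
    delta = target - velocity
    max_change = accel * dt
    return velocity + max(-max_change, min(max_change, delta))

v = 18.0                 # roughly 40 mph forward, in m/s
for frame in range(5):   # player slams into reverse; change is gradual
    v = step(v, -18.0, accel=8.0, dt=1 / 60)
    print(f"frame {frame}: {v:.2f} m/s")
```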

Mitchell: In that same vein, just as Palmer’s saying the hardware’s not perfect yet, a huge part of it is the content.

Luckey: You could make perfect hardware. Pretend we have the Matrix. Now you take someone and put them in a fighter jet and have them spinning in circles. That’s going to make someone sick no matter how good it is, because that actually does make people sick. If you make perfect hardware, and then you do things that make people sick in real life, you’re gonna make them sick in VR, too. Right now, there’s lots of things going on in games that don’t make people sick only because they’re looking at them on a screen. Or, in so many games, they’ll have cutscenes where they take control of the camera and shake it around. You don’t want to do that in VR because you’re not actually shaking around in real life.

You’re changing the experience that you have previously established within VR.

Mitchell: It breaks the immersion.

Luckey: And that’s why it’s so hard to instantly transfer. In the original version of Half-Life 2, when you’d go into a new space for the first time, the whole game would just freeze for a second while it loads. It’s just a short freeze, but players were running along or driving along and all of a sudden, jjt! Now it looks like the whole world’s dragging along with you, and a lot of people feel very queasy when that happens.

Mitchell: It comes back to content. My talk at GDC was very specifically about how developing for VR is different from a 2-D monitor. All those things like cutscenes, storytelling, scale of the world — if the player is at four feet on the 2-D monitor and you put them in there, they immediately notice. They look down and they have the stereo cues: “I’m a midget!” So you make them taller, and now they don’t fit through doors. We really do believe that, at first, you’re going to see these ports of existing games, but the best “killer app” experiences are going to come from those made-for-VR games.

Luckey: And that’s not even to say it has to be new franchises. It doesn’t have to be a new type of game. But you want the content to be designed specifically for the hardware.

Mitchell: It’s just like the iPhone. The best games come from developers pairing hardware and software.

Oculus VR CEO Brendan Iribe testing out the Rift at D: Dive Into Media.

And that’s the 10,000-foot view: Does VR change game design in a fundamental way?

Mitchell: Yes. Fundamentally. Absolutely. I think, right now, there’s this great renaissance in the indie community. Indie developers are doing awesome things. If you look at games like The Walking Dead, you’ve got the mainstream genres here. You’re going to have a lot of these indie games start to feel more natural in virtual reality, because that’s almost, like, the intended experience.

Luckey: And not to invent a whole new genre on the fly, but you don’t see many first-person card games or something. There’s a lot of card game videogames, but there’s not many that are first-person because it wouldn’t make any sense to do.

Like a poker game where you could look around the table and read people’s reactions?

Mitchell: Exactly.

Luckey: And you could have all kinds of things integrated into it. I guess that would fit into the first-person-shooter genre, but not really, because you’re not moving and you’re not shooting. You’re just playing cards.

Mitchell: And if you look at the research that’s been done on virtual characters, it’s the type of thing where, if you smile at me in VR, even if you’re an NPC (non-playable character), I’m much more likely to smile back. Your brain is tricked into believing you’re there.

Luckey: There’s also fascinating research on confidence levels in VR, even tiny things. There was a study where a bunch of people performed tasks in real life, in a control group, and then performed them in VR. And the only difference is that one group in VR was about six inches taller than the other group. So, one was shorter than the NPC they were interacting with, one was taller. Universally, all of the “taller” people exhibited better negotiation with the NPCs. Then, they took them out (of the VR simulation) and they redid the (real-world) study, putting everyone back in another trial with a physical person. The people who’d been tall in VR and negotiated as a taller person did better when they went back into the real negotiation as well. It’s ridiculous.

Mitchell: That’s the sort of thing we’re super-excited about. That’s the dream.

And do you have a timeline for when –

Mitchell: When the dream comes to fruition?

Luckey: It’s a dream, man! Come on! [laughs]

Not when it comes to fruition. Are there milestones for specific accomplishments along the way?

Luckey: Sure, we have them, internally. [laughs]

Mitchell: We have a road map, but like we keep saying, a huge part of this is content. Without the content, it’s just a pair of ski goggles.

Luckey: And we don’t even know, necessarily, what a road map needs to look like. We’re getting this feedback, and if a lot of people need a certain feature — well, that means it’s going to take a little longer.

Mitchell: But we have a rough road map planned, and a lot of exciting stuff planned that I think you’ll see over the course of the next year.

And is there a timeline for when the first consumer version comes out?

Mitchell: It’s TBD. But what we can say is, Microsoft and Sony release their dev kits years in advance, before they get up onstage and say, “The Xbox One is coming.” We went for the same strategy, just openly and publicly.

Luckey: And we don’t want to wait many years before doing it.

Mitchell: Right. So, right now, we’re giving developers the chance to build content, but they’re also co-developing the consumer version of the Rift with us. Once everyone’s really happy with it, that’s when you’ll see us come to market.

Luckey: And not sooner. We don’t want to announce something and then push for that date, even though we know we can make it better.


And what about the company, Oculus VR? Is this dream you’re talking about something you have to realize on your own? Do you want to someday get acquired?

Luckey: Our No. 1 goal is doing it on our own. We’re not looking to get acquired, we’re not looking to flip the company or anything. I mean, partnering with someone? Sure, we’re totally open to discussions. It’s not like we want to do this with no help.

But you wouldn’t want to be absorbed into a bigger company that’s doing more than just VR.

Mitchell: The goal has been to build great consumer VR, specifically for gaming. We all believe VR is going to be one of the most important technologies of –

Luckey: — ever!

Mitchell: Basically.

Not to be too hyperbolic or anything.

Luckey: It’s hard not to be. It’s like every other technological advance could practically be moot if you could do all of it in the virtual world. Why would you even need to advance those things in the real world?

Mitchell: Sooo …

Luckey: [laughs]

Mitchell: With that in mind, we have to figure out how we get there. But right now, we’re doing it on our own.

Luckey: And we think we can deliver a good consumer VR experience without having to partner with anyone. We’re open to partnering, but we don’t think we have to. We’re not banking on it.

And how does being based in southern California compare to being closer to a more conventional tech hub like Silicon Valley?

Mitchell: Recruiting is a little harder for us. But overall, we’ve been able to attract incredible talent.

Luckey: And if you’re in Silicon Valley, it’s probably one of the easiest places to start a company in terms of hiring people. But VR is such a tiny field, it’s not like all of a sudden we’re going to go to Silicon Valley and there’s, like, thousands of VR experts. Now, if I’m a Web company or a mobile company –

Mitchell: — that’s where I’d want to be.

Luckey: But in this case, these people aren’t necessarily all up in Silicon Valley. We’ve hired a bunch of people from Texas and Virginia and all these other places. It’s a niche industry. We actually have the biggest concentration of people working in consumer VR right now. And a lot of the top talent we get, they don’t care where we are, as long as it’s not, like, Alaska. They just really want to work on virtual reality, and there’s no one else doing it like we are.

Original author: Peter Bright

AMD

AMD wants to talk about Heterogeneous Systems Architecture (HSA), its vision for the future of system architectures. To that end, it held a press conference last week to discuss what it's calling "heterogeneous Uniform Memory Access" (hUMA). The company outlined what it is doing and why, confirming and reaffirming the things it has been saying for the last couple of years.

The central HSA concept is that systems will have multiple kinds of processors, connected together and operating as peers. The two main kinds of processors are conventional, versatile CPUs and more specialized GPUs.

Modern GPUs have enormous parallel arithmetic power, especially for floating point, but are poorly suited to single-threaded code with lots of branches. Modern CPUs are the opposite: well suited to branchy single-threaded code, less well suited to massively parallel number crunching. Splitting workloads between a CPU and a GPU, using each for the work it's good at, has driven the development of general-purpose GPU (GPGPU) software.
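
A rough illustration of that split, with NumPy's vectorized math standing in for a GPU kernel (an analogy, not actual GPGPU code): branchy, irregular work suits the CPU's serial model, while one uniform arithmetic expression over a large array matches what GPUs do well:

```python
# Contrast of the two workload shapes described above. NumPy's array
# math is used here only as a stand-in for a data-parallel GPU kernel.
import numpy as np

def branchy_cpu_work(items) -> int:
    """Irregular control flow: a poor fit for GPU lanes."""
    total = 0
    for x in items:
        if x % 3 == 0:
            total += x * 2
        elif x % 5 == 0:
            total -= x
    return total

def parallel_gpu_style(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One uniform floating-point expression over every element."""
    return a * b + np.sqrt(a)

print(branchy_cpu_work(range(100)))
print(parallel_gpu_style(np.ones(1_000_000), np.full(1_000_000, 2.0))[:3])
```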


Original author: Sean Gallagher


The ArxCis-NV DIMM combines DDR3 dynamic memory with a flash memory backup. (Viking Technology)

The server world is still waiting for DDR4, the next generation of dynamic memory, to be ready for prime time. In the meantime, a new set of memory boards from Viking Technology aims to squeeze more performance out of servers not by providing faster memory, but by making it safer to keep more data in memory and less on disk or SSD. Viking has begun supplying dual in-line memory modules that combine DDR3 dynamic memory with NAND flash memory to create non-volatile RAM for servers and storage arrays—modules that don't lose their contents when the systems they're in lose power or shut down.

The ArxCis-NV DIMM, which Viking demonstrated at the Storage Networking Industry Association's SNW Spring conference in Orlando this week, plugs into standard DIMM memory slots in servers and RAID controller cards.  Viking isn't the only player in the non-volatile DIMM game—Micron Technology and AgigA Tech announced their own NVDIMM effort in November—but they're first to market. The modules shipping now to a select group of server manufacturers have 4GB of dynamic RAM and 8GB of NAND memory. Modules with double those figures are planned for later in the year, and modules with 16GB of DRAM and 32GB of NAND are in the works for next year.

The ArxCis can be plugged into existing servers and RAID controllers today as a substitute for battery-backed (BBU) memory modules. The modules are even equipped with batteries to power a last-gasp write to NAND memory in the event of a power outage. But the ArxCis is more than a better backup in the event of system failure. Viking's non-volatile DIMMs are aimed primarily at big in-memory computing tasks, such as high-speed in-memory transactional database systems and the indices used in search engines and other "hyper-scale" computing applications. Facebook's "Unicorn" search engine system, for example, keeps massive indices in memory to allow real-time response to user queries, as does the "type-ahead" feature in Google's search.

