
SAN


This is a guest post written by Claude Johnson, a Lead Site Reliability Engineer at salesforce.com.

The following is an architectural overview of salesforce.com’s core platform and applications. Other systems such as Heroku's Dyno architecture or the subsystems of other products such as work.com and do.com are specifically not covered by this material, although database.com is. The idea is to share with the technology community some insight about how salesforce.com does what it does. Any mistakes or omissions are mine.

This is by no means comprehensive, but if there is interest, the author would be happy to tackle other areas of how salesforce.com works. Salesforce.com wants to be more open with technology communities we have not previously engaged with. Here’s to the start of “Opening the Kimono” on how we work.

Since 1999, salesforce.com has been singularly focused on building technologies for business that are delivered over the Internet, displacing traditional enterprise software. Our customers pay via monthly subscription to access our services anywhere, anytime through a web browser. We hope this exploration of the core salesforce.com architecture will be the first of many contributions to the community.



The corporate data center is undergoing a major transformation the likes of which haven't been seen since Intel-based servers started replacing mainframes decades ago. It isn't just the server platform: the entire infrastructure from top to bottom is seeing major changes as applications migrate to private and public clouds, networks get faster, and virtualization becomes the norm.

All of this means tomorrow's data center is going to look very different from today's. Processors, systems, and storage are getting better integrated, more virtualized, and more capable of making use of greater networking and Internet bandwidth. At the heart of these changes are major advances in networking. We're going to examine six specific trends driving the evolution of the next-generation data center and discover what both IT insiders and end-user departments outside of IT need to do to prepare for these changes.

Beyond 10Gb networks

Network connections are getting faster, to be sure. Today it's common to find 10-gigabit Ethernet (10GbE) connections to some large servers. But even 10GbE isn't fast enough for data centers that are heavily virtualized or that handle large-scale streaming audio/video applications. As your population of virtual servers increases, you need faster networks to carry the higher traffic loads they generate. Starting up a new virtual server might save you from buying a physical server, but it doesn't lessen the data traffic over the network; in fact, depending on how your virtualization infrastructure works, a virtual server can impact the network far more than a physical one. And as ordinary enterprises use more audio and video applications in everyday business situations, file sizes balloon too. The result is multi-gigabyte files that can quickly fill up your pipes, even the big 10Gb internal pipes that make up your data center's LAN.
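To put that bandwidth squeeze in rough numbers, here is a minimal back-of-the-envelope sketch in Python; the file size and stream count are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope: how quickly multi-gigabyte media files
# saturate a 10GbE link. All inputs are illustrative assumptions.

LINK_GBPS = 10             # 10GbE line rate, gigabits per second
FILE_GB = 25               # hypothetical HD video master, gigabytes
CONCURRENT_STREAMS = 40    # hypothetical simultaneous transfers on the link

file_gbits = FILE_GB * 8
seconds_alone = file_gbits / LINK_GBPS
per_stream_gbps = LINK_GBPS / CONCURRENT_STREAMS
seconds_shared = file_gbits / per_stream_gbps

print(f"One {FILE_GB} GB file alone: {seconds_alone:.0f} s at line rate")
print(f"Same file with {CONCURRENT_STREAMS} streams sharing the link: "
      f"{seconds_shared / 60:.0f} minutes")
```

A single 25 GB file takes only about 20 seconds at line rate, but once a few dozen such transfers share the link, each one stretches to well over ten minutes, which is the congestion the article is describing.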



New submitter jyujin writes "Ever wonder how long your SSD will last? It's funny how bad people are at estimating just how long '100,000 writes' are going to take when spread over a device that spans several thousand of those blocks over several gigabytes of memory. It obviously gets far worse with newer flash memory that is able to withstand a whopping million writes per cell. So yeah, let's crunch some numbers and fix that misconception. Spoiler: even at the maximum SATA 3.0 link speeds, you'd still find yourself waiting several months or even years for that SSD to start dying on you."
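The arithmetic behind that spoiler is easy to reproduce. A minimal sketch, assuming a 256 GB drive, 100,000-cycle cells, perfect wear leveling, and sustained writes at the SATA 3.0 ceiling (all assumed values, not the submitter's exact figures):

```python
# Rough SSD wear-out estimate: how long continuous writes at the SATA 3.0
# ceiling take to exhaust NAND endurance, assuming perfect wear leveling.
# Drive size, cycle count, and write amplification are illustrative assumptions.

CAPACITY_GB = 256            # hypothetical drive capacity
PE_CYCLES = 100_000          # program/erase cycles per cell (the figure quoted above)
SATA3_MB_S = 600             # SATA 3.0 theoretical maximum, MB/s
WRITE_AMPLIFICATION = 1.0    # ideal case; real controllers write more than asked

total_writable_mb = CAPACITY_GB * 1024 * PE_CYCLES / WRITE_AMPLIFICATION
seconds = total_writable_mb / SATA3_MB_S
years = seconds / (3600 * 24 * 365)

print(f"{total_writable_mb / 1024 ** 2:,.0f} TB of writes before wear-out")
print(f"about {years:.1f} years writing flat out at {SATA3_MB_S} MB/s")
```

Even under these worst-case assumptions, the drive absorbs roughly 25,000 TB of writes, which works out to more than a year of writing at the interface's maximum speed around the clock.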




ESSAY CONTAINS EXPLICIT CONTENT

Maki Maki

Welcome 2 My Room


 

The Internet is reached by millions of people every second. They can communicate with each other, and sometimes very private things are told and shown on blogs through photos, videos, and writing. Although that was not my initial intention, this is what I experienced with this series, called “Welcome 2 My Room”.

Usually, to take a photograph, you have to be physically in front of the person you want to shoot with your camera. That all changed on the Internet, with chats, webcams, and other ways to meet the image of a person virtually on your computer screen. In this work I explored a new way of making photographs: shooting portraits off my computer screen with an analog Polaroid camera while chatting live with sex workers through their webcams.

The starting point of this series of portraits was the discovery of a website in the Philippines: a peep show with chat and webcam. Girls and boys working at home alone, or several people together in so-called “studios”. Precarity is omnipresent. At the time there were more than 300 of them; now there are twice as many…

Sometimes links are created; other times it’s “just business”. All those gazes, those stories intersecting, including mine…

I started taking pictures of them with my old Polaroid camera pointed at my computer screen. I have always photographed the people I meet, so why not do it with a computer screen in between. Sometimes the exchanges and discussions are intense. Feelings, lives, bodies laid bare… Sincerity meets cunning. But of course there’s the money. They will do anything to make you pay. And yet sometimes, in the middle of our conversations, emotion overwhelms… Tears of blood…

In the end, thousands of Polaroid snapshots (and also some black-and-white roll films) were taken in my bedroom in front of my computer screen during the highlights of our conversations or private shows… Trying to give a face to sex… As always, the image rules as a singular weapon… We play with it, we come with it…

 

Bio

Born and living in Marseille (France) since 1964.

He studied photography in the early 1980s and has worked in photography ever since. In 2000 he turned toward a more experimental and intimate kind of photography.

He has participated in solo and group photo exhibitions in Europe and Japan, and has been published in exhibition catalogs, record covers, art magazines, books…

He is currently working on a series about Japan called “Japan Somewhere”. Some photos from this series will be published in December 2012 in the photobook “MONO”, about contemporary black-and-white photographers, edited by Gommabooks, alongside photographers such as Antoine d’Agata, Daido Moriyama, Anders Petersen, Roger Ballen, Trent Parke…

He has been a founding member of the European photographers’ collective SMOKE since 2007.

In 2010 he created Média Immédiat Publishing, a book collection currently composed of 9 mini photobooks featuring photographers such as Morten Andersen, Ed Templeton, Onaka Koji, Jukka Onnela, and Daisuke Ichiba.

 


snydeq writes "Deep End's Paul Venezia sees few business IT situations that could make good use of full cloud storage services, outside of startups. 'As IT continues in a zigzag path of figuring out what to do with this "cloud" stuff, it seems that some companies are getting ahead of themselves. In particular, the concept of outsourcing storage to a cloud provider puzzles me. I can see some benefits in other cloud services (though I still find the trust aspect difficult to reconcile), but full-on cloud storage offerings don't make sense outside of some rare circumstances.'"





New submitter Lashat writes "According to Ars Technica, 'A new survey seems to show that VMware's iron grip on the enterprise virtualization market is loosening, with 38 percent of businesses planning to switch vendors within the next year due to licensing models and the robustness of competing hypervisors.' What do IT-savvy Slashdotters have to say about moving away from one of the more stable and feature-rich VM architectures available?"



We recently moved from Amazon on-demand “cloud” hosting to our own dedicated servers. It took about three months to order and set up the new servers, versus a few minutes to get servers on Amazon. However, the new servers are 2.5x faster and, so far, more reliable.

We love Amazon for fostering development and innovation.  Cloud computing systems are great at getting you new servers.  This helps a lot when you are trying to innovate because you can quickly get new servers for your new services. If you are in a phase of trying new things, cloud hosts will help you.

Cloud hosts also help a lot when you are testing.  It’s amazing how many servers it takes to run an Internet service.  You don’t just need production systems.  You need failover systems.  You need development systems.  You need staging/QA systems.  You will need a lot of servers, and you may need to go to a cloud host.

However, there are problems with cloud hosting that emerge if you need high data throughput. The problems aren't with the servers but with storage and networking. To see why, let's look at how a cloud architecture differs from a local-box architecture. You can't directly attach each storage location to the box that it serves. You have to use network-attached storage.

DEDICATED ARCHITECTURE:  Server Box -> bus or lan or SAN -> Storage

CLOUD ARCHITECTURE:  Server Box -> Mesh network -> Storage cluster with network replication

1) Underlying problem:  Big data, slow networks

Network-attached storage becomes a problem because there is a fundamental mismatch between networking and storage. Storage capacity almost doubles every year. Networking speed grows by a factor of ten only about every 10 years, roughly 100 times slower growth over a decade. The net result is that storage gets much bigger than network capacity, and it takes a really long time to copy data over a network. I first heard this trend analyzed by John Landry, who called it “Landry's law.” In my experience, this problem has gotten to the point where even sneakernet (putting on sneakers and carrying data on big storage media) cannot save us, because after you lace up your sneakers, you have to copy the data OVER A NETWORK to get it onto the storage media and then copy it again to get it off. When we replicated the Assembla data to the new datacenter, we realized that it would be slower to do those two copies than to replicate over the Internet, which is slower than sneakernet for long-distance transport but only requires one local network copy.
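To see why the two local copies can dominate, here is a minimal sketch of the comparison; the dataset size, link speeds, and shipping time are illustrative assumptions, not Assembla's actual numbers:

```python
# Sketch of the sneakernet-vs-Internet comparison described above.
# Sneakernet needs two local-network copies (onto the media, then off again)
# plus shipping; replicating over the Internet needs one slower long-haul copy.
# Every number below is an illustrative assumption, not an Assembla measurement.

DATA_TB = 20            # hypothetical dataset size
LOCAL_NET_MB_S = 100    # effective local copy rate to/from the storage media
WAN_MB_S = 60           # effective long-distance replication rate
SHIPPING_HOURS = 24     # overnight courier for the drives

data_mb = DATA_TB * 1024 * 1024

sneakernet_hours = 2 * (data_mb / LOCAL_NET_MB_S) / 3600 + SHIPPING_HOURS
internet_hours = (data_mb / WAN_MB_S) / 3600

print(f"Sneakernet: {sneakernet_hours:.0f} hours (two local copies + shipping)")
print(f"Internet:   {internet_hours:.0f} hours (one long-haul copy)")
```

With these assumed rates the single long-haul copy finishes in roughly four days while the sneakernet path takes closer to six, because the local network has to be crossed twice.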

2) Mesh network inconsistency

The Internet was designed as a hub-and-spoke network, and that part of it works great. When you send a packet up from your spoke, it travels a predictable route through various hubs to its destination. When you plug dedicated servers into the Internet, you plug a spoke into the hub, and it works in the traditional way. The IP network inside a cloud datacenter is more of a “mesh”: packets can take a variety of routes between the servers and the storage. The mesh component is vulnerable to both packet loss and capacity problems. I can't give a technical reason why this is true, but in our observation it is. We have seen two different issues:

* Slowdowns and brownouts:  This is a problem at both Amazon and GoGrid, but it is easier to see at Amazon.  Their network, and consequently their storage, has variable performance, with slow periods that I call “brownouts.”

* Packet loss:  This is related to the capacity problems, since routers throw away packets when they are overloaded. However, the source of the packet loss seems to be much harder to debug in a mesh network. We see these problems on the GoGrid network, and their attempts to diagnose them are often ineffectual.

3) Replication stoppages

Cloud storage has two goals: the first is to never lose data, and the second is to provide high availability. When there is a failure in the storage cluster, the first goal (don't lose data) kicks in and stomps on the second goal (high availability). Systems will stop accepting new data and make sure that old data gets replicated. Network-attached storage will typically start replicating data to a new node. It may either refuse new data until everything can be replicated reliably, or it will absorb all network capacity and block normal operation of the mesh.

Note that in a large, complex system, variations in both network speed and storage capacity follow a power-law distribution. This happens "chaotically." When the variation reaches a certain low level of performance, the system fails because of the replication problem.

I think we should be able to predict the rate of major failures by observing the smaller variations and extrapolating them with a power law. Amazon had a major outage in April 2011. Throughout the previous 18 months they had performance brownouts, and I think the frequency of one could have been predicted from the other.
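As a rough illustration of that prediction idea, here is a sketch that fits a power law to hypothetical brownout counts and extrapolates to rare, severe events; the data points are invented placeholders, not observations from Amazon or GoGrid:

```python
# Sketch of the extrapolation idea above: fit a power law to the frequency of
# small performance brownouts and read off the implied rate of rare, severe
# outages. The counts below are made-up placeholders, not real measurements.
import numpy as np

# severity: factor by which storage performance degraded during an event
severity = np.array([2.0, 4.0, 8.0, 16.0])
# events_per_year: how often (hypothetically) each severity level was seen
events_per_year = np.array([200.0, 70.0, 25.0, 9.0])

# Fit log(frequency) = a * log(severity) + b, i.e. frequency ~ severity ** a
a, b = np.polyfit(np.log(severity), np.log(events_per_year), 1)

def predicted_rate(sev):
    """Predicted events per year at a given severity factor."""
    return np.exp(b) * sev ** a

# Extrapolate to a catastrophic event, say a 100x degradation (an outage)
print(f"fitted exponent: {a:.2f}")
print(f"predicted major outages per year: {predicted_rate(100):.2f}")
```

With these placeholder counts the fit predicts a bit more than one major outage every two years, which is the kind of back-of-the-envelope forecast the paragraph above is arguing for.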

CONCLUSION

So, if your application is storage-intensive and needs high availability, you must do one of the following:

1) Design it so that lots of replication is running all of the time, and so that you can afford to lose access to any specific storage node. This places limits on the speed at which your application can absorb data, because you need to reserve a big percentage of scarce network capacity for replication. So you will have only a small percentage of network capacity available for absorbing external data (a rough capacity sketch follows this list). However, it is the required architecture for very large systems. It works well if you have a high ratio of output to input, since output just uses the replicated data rather than adding to it.

If you try this replication strategy, you will need to deal with two engineering issues. First, you will have to think through replication specifically for your application. There are many new database architectures that make this tradeoff in various ways. Each has strengths and weaknesses, so if you design a distributed system, you will probably end up using several of these new architectures. Second, you will need to distribute across multiple mesh network locations. It's not enough to have several places to get your data if they all sit in the same network neighborhood; if there is a problem, the entire mesh will jam up. Ask about this.

2) Use local storage
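To make the capacity reservation in option 1 concrete, here is a minimal sketch of the arithmetic, assuming a per-node 10Gb NIC, 3-way replication, and half the capacity already consumed by reads (all assumed values, not measurements):

```python
# Capacity reservation sketch for option 1 above: with N-way replication,
# each byte of new data arriving at a node also has to be pushed to N-1
# replicas over the same mesh. All numbers are illustrative assumptions.

NIC_GBPS = 10            # per-node network capacity (assumed)
REPLICATION_FACTOR = 3   # each write stored on 3 nodes (assumed)
READ_SHARE = 0.5         # fraction of capacity already serving reads (assumed)

# Of the capacity left after reads, every ingested bit costs 1 bit coming in
# plus (REPLICATION_FACTOR - 1) bits going back out for replication.
available = NIC_GBPS * (1 - READ_SHARE)
max_ingest_gbps = available / REPLICATION_FACTOR

print(f"Capacity left for writes + replication: {available:.1f} Gb/s")
print(f"Maximum external ingest rate:           {max_ingest_gbps:.2f} Gb/s")
```

Under these assumptions only about 1.7 Gb/s of a 10Gb link is left for absorbing external data, which is why the text above says replication limits how fast the application can take in new data.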


Founded by game industry veterans Brenda Brathwaite and John Romero, our team is the most experienced game development team in the social space. If you genuinely love making and playing games, we are the right home for you.
