
relational database


Jeremy Edberg, the first paid employee at reddit, gave an excellent talk at the RAMP conference on what it takes to build a successful social site. Watch it here: Scaling Reddit from 1 Million to 1 Billion–Pitfalls and Lessons.

Jeremy uses a virtues-and-sins approach: he shares examples of the mistakes made in scaling reddit, and it turns out they did a lot of good stuff too. Somewhat of a shocker: Jeremy is now a Reliability Architect at Netflix, so we get a little Netflix perspective thrown in for free.

Some of the lessons in the talk stood out for me in particular.

Original author: Stack Exchange

This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

abel is in the early stages of developing a closed-source financial app within a niche market. He is hiring his first employees, and he wants to take steps to ensure these new hires don't steal the code and run away. "I foresee disabling USB drives and DVD writers on my development machines," he writes. But will that be enough? Maybe a better question is: will that be too much?

See the original question here.



The tech unit's sign, autographed by its members.

The reelection of Barack Obama was won by people, not by software. But in a contest as close as last week's election, software may have given the Obama for America organization's people a tiny edge—making them by some measures more efficient, better connected, and more engaged than the competition.

That edge was provided by the work of a group of people unique in the history of presidential politics: Team Tech, a dedicated internal team of technology professionals who operated like an Internet startup, leveraging a combination of open source software, Web services, and cloud computing power. The result was the sort of numbers any startup would consider a success. As Scott VanDenPlas, the head of the Obama technology team's DevOps group, put it in a tweet:

4Gb/s, 10k requests per second, 2,000 nodes, 3 datacenters, 180TB and 8.5 billion requests. Design, deploy, dismantle in 583 days to elect the President. #madops



Hi, I'm just starting to build web applications, and I'm not really familiar with how to store data so that it scales well.

Consider this pseudo-data, in JSON: { unique id, [array of child ids], [auxiliary data] }

Right now, every request I get requires me to fetch auxiliary info given an ID. About 3/4 of the time I need to update some of the auxiliary data. About half of the time, I need to query again for the auxiliary data of one of the "child" IDs from the array. This makes me want to just use a relational database and generate another query based on the child IDs whenever I need to traverse the natural graph structure of the data (doing very "shallow" searches).
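The access pattern described above maps naturally onto a relational adjacency-list layout: one table for nodes (with the auxiliary data), one for parent-child edges. Here is a minimal sketch using SQLite from Python; the table and column names (`nodes`, `edges`, `aux`) are hypothetical, not from the original post.

```python
import sqlite3

# Adjacency-list schema: one row per node, one row per parent->child edge,
# auxiliary data kept as a plain column (e.g. serialized JSON).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (
        id  INTEGER PRIMARY KEY,
        aux TEXT
    );
    CREATE TABLE edges (
        parent INTEGER NOT NULL REFERENCES nodes(id),
        child  INTEGER NOT NULL REFERENCES nodes(id),
        PRIMARY KEY (parent, child)
    );
""")
conn.executemany("INSERT INTO nodes VALUES (?, ?)",
                 [(1, "root"), (2, "left"), (3, "right")])
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(1, 2), (1, 3)])

# The common request: fetch auxiliary data by ID (a single indexed lookup) ...
aux = conn.execute("SELECT aux FROM nodes WHERE id = ?", (1,)).fetchone()[0]

# ... and, about half the time, a second lookup for one child's aux data.
child_rows = conn.execute("""
    SELECT n.aux
    FROM edges e JOIN nodes n ON n.id = e.child
    WHERE e.parent = ?
""", (1,)).fetchall()
```

Both queries hit a primary-key or composite-key index, so shallow one-hop lookups like these stay cheap even as the table grows.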

I'm wondering how well this would continue to work if I suddenly decided to do a lot more "depth-first" query patterns (that is, every query would likely be followed by a query to its child, which has an unpredictable ID), and whether specialized graph databases (not SQL) would give me more scalability in this case. I don't actually know much about how they work, but I imagine that if there's any reason they exist, it's for stuff like this.

Can anyone point me in the right direction? If a single request generates a chain of sequential SELECTs to traverse a graph am I doing it wrong?
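One common answer to the "chain of sequential SELECTs" worry is that SQL itself can express a bounded graph traversal in a single query via a recursive common table expression, avoiding a network round-trip per hop. A minimal sketch with SQLite (the schema and depth cap are illustrative assumptions, not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, aux TEXT);
    CREATE TABLE edges (parent INTEGER, child INTEGER);
""")
conn.executemany("INSERT INTO nodes VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c"), (4, "d")])
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4)])  # a simple chain 1 -> 2 -> 3 -> 4

# One query walks the graph from a starting node, instead of one
# SELECT per hop; the depth cap bounds the traversal.
rows = conn.execute("""
    WITH RECURSIVE reachable(id, depth) AS (
        SELECT ?, 0
        UNION ALL
        SELECT e.child, r.depth + 1
        FROM edges e JOIN reachable r ON e.parent = r.id
        WHERE r.depth < 3
    )
    SELECT n.id, n.aux, r.depth
    FROM reachable r JOIN nodes n ON n.id = r.id
    ORDER BY r.depth
""", (1,)).fetchall()
# rows: [(1, 'a', 0), (2, 'b', 1), (3, 'c', 2), (4, 'd', 3)]
```

This still does index lookups per edge internally, so for very deep or very wide traversals over huge datasets a dedicated graph database may win, but for shallow, bounded-depth walks a relational store with recursive CTEs often scales fine.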

submitted by ReallyGoodAdvice
