Original author: r_adams

Moving from physical servers to the "cloud" involves a paradigm shift in thinking. Generally, in a physical environment you care about each individual host; each has its own static IP, you probably monitor them individually, and if one goes down you have to get it back up ASAP. You might think you can just move this infrastructure to AWS and start getting the benefits of the "cloud" straight away. Unfortunately, it's not quite that easy (believe me, I tried). You need to think differently when it comes to AWS, and it's not always obvious what needs to be done.

So, inspired by Sehrope Sarkuni's recent post, here's a collection of AWS tips I wish someone had told me when I was starting out. These are based on things I've learned deploying various applications on AWS both personally and for my day job. Some are just "gotchas" to watch out for (and that I fell victim to), some are things I've heard from other people that I ended up implementing and finding useful, but mostly they're just things I've learned the hard way.
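For example, instead of pinning configuration to each host's static IP, you can discover instances dynamically by tag. Here is a minimal sketch using boto3; the "Role" tag and "web" value are hypothetical, and it assumes AWS credentials and a default region are already configured:

    # Minimal sketch: find running web servers by tag instead of static IPs.
    # Assumes boto3 is installed and AWS credentials/region are configured;
    # the "Role" tag name and "web" value are made-up examples.
    import boto3

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Role", "Values": ["web"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )

    web_ips = [
        instance.get("PrivateIpAddress")
        for reservation in resp["Reservations"]
        for instance in reservation["Instances"]
    ]
    print(web_ips)

Treating hosts as interchangeable and locating them by tag (or behind a load balancer) is the kind of shift in thinking the rest of these tips build on.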


Nerval's Lobster writes "For most businesses, data analytics presents an opportunity. But for DARPA, the military agency responsible for developing new technology, so-called 'Big Data' could represent a big threat. DARPA is apparently looking to fund researchers who can 'investigate the national security threat posed by public data available either for purchase or through open sources.' That means developing tools that can evaluate whether a particular public dataset will have a significant impact on national security, as well as blunt the force of that impact if necessary. 'The threat of active data spills and breaches of corporate and government information systems are being addressed by many private, commercial, and government organizations,' reads DARPA's posting on the matter. 'The purpose of this research is to investigate data sources that are readily available for any individual to purchase, mine, and exploit.' As Foreign Policy points out, there's a certain amount of irony in the government soliciting ways to reduce its vulnerability to data exploitation. 'At the time government officials are assuring Americans they have nothing to fear from the National Security Agency poring through their personal records,' the publication wrote, 'the military is worried that Russia or al Qaeda is going to wreak nationwide havoc after combing through people's personal records.'"


An anonymous reader writes "Ralph Langner, the security expert who deciphered how Stuxnet targeted the Siemens PLCs in Iran's Natanz nuclear facility, has come up with a cybersecurity framework for industrial control systems (ICS) that he says is a better fit than the U.S. government's Cyber Security Framework. Langner's Robust ICS Planning and Evaluation, or RIPE, framework takes a different approach to locking down ICS/SCADA plants than the NIST-led one, focusing on security capabilities rather than risk. He hopes it will help influence the final version of the U.S. government's framework."


The intro for yesterday's video interview with Don Marti started out by saying, "Don Marti," says Wikipedia, "is a writer and advocate for free and open source software, writing for LinuxWorld and Linux Today." As we noted, Don has moved on since that description was written. In today's interview he starts by talking about some things venture capitalist Mary Meeker of Kleiner Perkins has said, notably that people only spend 6% of their media-intake time with print, but advertisers spend 23% of their budgets on print ads. To find out why this is, you might want to read a piece Don wrote titled Targeted Advertising Considered Harmful. Or you can just watch today's video -- and if you didn't catch Part One of our video conversation yesterday, you might want to check it out before watching Part 2.

Original author: Todd Hoff

Now that we have the C10K concurrent connection problem licked, how do we level up and support 10 million concurrent connections? Impossible, you say? Nope, systems right now are delivering 10 million concurrent connections using techniques that are as radical as they may be unfamiliar.

To learn how it’s done we turn to Robert Graham, CEO of Errata Security, and his absolutely fantastic talk at Shmoocon 2013, "C10M: Defending the Internet at Scale."

Robert has a brilliant way of framing the problem that I’d never heard before. He starts with a little bit of history, relating how Unix wasn’t originally designed to be a general server OS; it was designed to be a control system for a telephone network. It was the telephone network that actually transported the data, so there was a clean separation between the control plane and the data plane. The problem is we now use Unix servers as part of the data plane, which we shouldn’t do at all. If we were designing a kernel for handling one application per server, we would design it very differently than we would a multi-user kernel.

Which is why he says the key is to understand:

  • The kernel isn’t the solution. The kernel is the problem.

Which means:

  • Don’t let the kernel do all the heavy lifting. Take packet handling, memory management, and processor scheduling out of the kernel and put it into the application, where it can be done efficiently. Let Linux handle the control plane and let the application handle the data plane.

The result will be a system that can handle 10 million concurrent connections with 200 clock cycles for packet handling and 1,400 clock cycles for application logic. As a main memory access costs 300 clock cycles, it’s key to design in a way that minimizes code and cache misses.

With a data plane oriented system you can process 10 million packets per second. With a control plane oriented system you only get 1 million packets per second.
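A quick back-of-the-envelope check of those numbers (only the per-packet cycle counts come from the talk; the 2 GHz clock speed and perfect scaling across cores are assumptions for illustration):

    # Back-of-the-envelope check of the cycle budget quoted above.
    # Only the per-packet cycle counts come from the talk; the 2 GHz clock
    # and perfect scaling across cores are illustrative assumptions.
    PACKETS_PER_SEC = 10_000_000
    CYCLES_PACKET_HANDLING = 200
    CYCLES_APP_LOGIC = 1_400
    CPU_HZ = 2_000_000_000  # assume a 2 GHz core

    cycles_needed = PACKETS_PER_SEC * (CYCLES_PACKET_HANDLING + CYCLES_APP_LOGIC)
    cores_needed = cycles_needed / CPU_HZ
    print(f"{cycles_needed:,} cycles/sec, roughly {cores_needed:.0f} cores at 2 GHz")
    # 16,000,000,000 cycles/sec, roughly 8 cores at 2 GHz
    # A single 300-cycle main memory access per packet would more than double
    # the 200-cycle packet-handling budget, which is why cache misses dominate.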

If this seems extreme, keep in mind the old saying: scalability is specialization. To do something great you can’t outsource performance to the OS. You have to do it yourself.

Now, let’s learn how Robert creates a system capable of handling 10 million concurrent connections...


Image credit: Aurich Lawson (after Aliens)

In one of the more audacious and ethically questionable research projects in recent memory, an anonymous hacker built a botnet of more than 420,000 Internet-connected devices and used it to perform one of the most comprehensive surveys ever to measure the insecurity of the global network.

In all, the nine-month scanning project found 420 million IPv4 addresses that responded to probes and 36 million more addresses that had one or more ports open. A large percentage of the unsecured devices bore the hallmarks of broadband modems, network routers, and other devices with embedded operating systems that typically aren't intended to be exposed to the outside world. The researcher found a total of 1.3 billion addresses in use, including 141 million that were behind a firewall and 729 million that returned reverse domain name system records. There were no signs of life from the remaining 2.3 billion IPv4 addresses.

Continually scanning almost 4 billion addresses for nine months is a big job. In true guerilla research fashion, the unknown hacker developed a small scanning program that scoured the Internet for devices that could be logged into using no account credentials at all or the usernames and passwords of either "root" or "admin." When the program encountered unsecured devices, it installed itself on them and used them to conduct additional scans. The viral growth of the botnet allowed it to infect about 100,000 devices within a day of the program's release. The critical mass allowed the hacker to scan the Internet quickly and cheaply. With about 4,000 clients, it could scan one port on all 3.6 billion addresses in a single day. Because the project ran 1,000 unique probes on 742 separate ports, and possibly because the binary was uninstalled each time an infected device was restarted, the hacker commandeered a total of 420,000 devices to perform the survey.
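A rough sanity check of that scan rate, assuming the work was spread evenly across the clients:

    # Rough sanity check of the single-day, single-port scan described above,
    # assuming the work was spread evenly across the botnet's clients.
    ADDRESSES = 3_600_000_000   # IPv4 addresses covered in one day
    CLIENTS = 4_000             # concurrent scanning clients
    SECONDS_PER_DAY = 86_400

    probes_per_client_per_second = ADDRESSES / CLIENTS / SECONDS_PER_DAY
    print(f"about {probes_per_client_per_second:.1f} probes/sec per client")
    # about 10.4 probes/sec per client, modest enough to go unnoticed on a
    # low-powered embedded device such as a home router.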



In the North of Sweden, in Lappland, there is a university spinoff company named BehavioSec that decides you are you (or that a person using your computer is not you) by the way you type. Not the speed but the rhythm and style quirks are what they detect and use for authentication. BehavioSec CEO/CTO Neil Costigan obviously knows far more about this than we do, which is why Tim Lord met with him at the 2013 RSA Conference and had him tell us exactly how BehavioSec's system works. As usual, we've provided both a video and a transcript (there's a small "Show/Hide Transcript" link immediately below the video) so you can either watch or read, whichever you prefer.
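As a rough illustration of the general idea (a generic keystroke-dynamics sketch, not BehavioSec's actual algorithm), typing rhythm can be reduced to key-hold ("dwell") and key-to-key ("flight") timings and compared against an enrolled profile:

    # Generic keystroke-dynamics sketch; not BehavioSec's algorithm.
    # Each sample is a list of (key, press_time, release_time) tuples.
    # Dwell = how long a key is held; flight = gap between releasing one key
    # and pressing the next. A new attempt is compared to an enrolled profile
    # with a simple mean-absolute-difference score.
    from statistics import mean

    def features(sample):
        dwell = [release - press for _, press, release in sample]
        flight = [
            sample[i + 1][1] - sample[i][2]  # next press minus current release
            for i in range(len(sample) - 1)
        ]
        return dwell + flight

    def distance(profile, attempt):
        return mean(abs(p - a) for p, a in zip(profile, attempt))

    # Enrolled profile and a new login attempt for the same short passphrase
    # (timings in seconds, made up for illustration).
    enrolled = features([("p", 0.00, 0.09), ("a", 0.15, 0.22), ("s", 0.30, 0.38)])
    attempt = features([("p", 0.00, 0.10), ("a", 0.16, 0.22), ("s", 0.31, 0.40)])

    THRESHOLD = 0.05  # acceptance threshold, purely illustrative
    print("accepted" if distance(enrolled, attempt) < THRESHOLD else "rejected")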
