Coordinated attacks used to knock websites offline grew meaner and more powerful in the past three months, with an eight-fold increase in the average amount of junk traffic used to take sites down, according to a company that helps customers weather the so-called distributed denial-of-service campaigns.
The average amount of bandwidth used in DDoS attacks mushroomed to an astounding 48.25 gigabits per second in the first quarter, with peaks as high as 130 Gbps, according to Hollywood, Florida-based Prolexic. During the same period last year, bandwidth in the average attack was 6.1 Gbps and in the fourth quarter of last year it was 5.9 Gbps. The average duration of attacks also grew to 34.5 hours, compared with 28.5 hours last year and 32.2 hours during the fourth quarter of 2012. Earlier this month, Prolexic engineers saw an attack that exceeded 160 Gbps, and officials said they wouldn't be surprised if peaks break the 200 Gbps threshold by the end of June.
The spikes are brought on by new attack techniques that Ars first chronicled in October. Rather than using compromised PCs in homes and small offices to flood websites with torrents of traffic, attackers are relying on Web servers, which often have orders of magnitude more bandwidth at their disposal. As Ars reported last week, an ongoing attack on servers running the WordPress blogging application is actively seeking new recruits that can also be harnessed to form never-before-seen botnets to bring still more firepower.
ESSAY CONTAINS EXPLICIT CONTENT
Zaida González Ríos
My intention is to critique the traditions and social references of Western culture, and to use irony to question certain canons, such as the idealization of the body in advertising and media, the role of gender, and a consumer-based existence driven by globalization and individualism, in an environment increasingly marked by the disposable.
I seek to show something different: that which is not well regarded or accepted, an escape from what we have been taught to “behold and admire.” This is manifested with ordinary models, average people who would not otherwise be photographed for an advertising campaign.
With the inclusion of dead and deformed babies in the photographs, I intend to rescue people who were abandoned without a proper farewell. I want to dignify them, transporting them into a picture, surrounded by objects and symbolism, to give them a history so that they do not go unnoticed or ignored. I confront the viewer with a truth that weighs on the conscience of agricultural industries, since the indiscriminate use of toxic pesticides causes children to be born with, and die from, physiological and physical deformities every year. This fact is hidden from society by companies that hold economic power in our country.
With the lighting techniques used in the images (black-and-white pictures painted by hand) and their small format, I intend to create a break between form and substance, softening and dislodging the message.
1977, San Miguel, Santiago de Chile.
Photographer and veterinarian.
Zaida received her degree in commercial photography but has since dedicated herself exclusively to personal projects, exhibiting her work in various group and solo exhibitions in Chile.
Her work has been featured internationally in Colombia, Argentina, the USA, Belgium, Peru, Spain, Uruguay, Venezuela, France, Portugal and Mexico.
She currently teaches photography in the Escuela de Comunicaciones Alpes and works as a freelance veterinarian.
She has authored three books to date: “Las Novias de Antonio” (La Revista, 2009), “Recuérdame al morir con mi último latido” (2010) and “Zaida Gonzalez De Guarda” (2013). Her last two books were published independently with the help of her brother, designer Marco Gonzalez.
She was the recipient of four photography scholarships from Fondart (2005, 2008, 2009 and 2011) and completed a fine-art photography residency with Nelson Garrido in Valparaiso (2010).
In 2007 and 2011 she was nominated for the Altazor award for her work “Conservatorio Celestial” and “Recuérdame al morir con mi último latido,” respectively.
In 2012 she won the Rodrigo Rojas De Negri award, a national prize recognizing emerging photographers.
In 2013 she was awarded a grant from DIRAC for a residency she completed with the ONG (Organización Nelson Garrido) in Caracas, Venezuela.
In this week’s photos from around New York, the New York Police Department gets a new chief, rides at Coney Island open five months after superstorm Sandy and the legacy of musician Prince is celebrated in SoHo.
CloudFlare's CDN is based on Anycast, a routing technique supported by the Border Gateway Protocol (BGP), the routing protocol at the center of how the Internet directs traffic. Anycast builds on BGP's support for multi-homing of IP addresses, in which multiple routers connect a network to the Internet; by advertising the IP addresses reachable through each router, BGP lets other routers determine the shortest path for network traffic to take to reach that destination.
Using Anycast means that CloudFlare makes the servers it fronts appear to be in many places, while only using one IP address. "If you do a traceroute to Metallica.com (a CloudFlare customer), depending on where you are in the world, you would hit a different data center," Prince said. "But you're getting back the same IP address."
That means that as CloudFlare adds more data centers, and those data centers advertise the IP addresses of the websites that are fronted by the service, the Internet's core routers automatically re-map the routes to the IP addresses of the sites. There's no need to do anything special with the Domain Name System to handle load-balancing of network traffic to sites, other than pointing the hostname for a site at CloudFlare's IP address. It also means that when a specific data center needs to be taken down for an upgrade or maintenance (or gets knocked offline for some other reason), the routes can be adjusted on the fly.
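The route selection described above can be sketched as a toy model (the data centers, prefixes, and path lengths here are hypothetical, not CloudFlare's actual routing tables): several sites advertise the same IP prefix, and each client's router simply prefers the advertisement with the shortest path, so the same IP address lands at a different data center depending on where you are.

```python
# Toy model of BGP-style Anycast route selection (illustrative only).
# Several data centers advertise the same prefix; a router picks the
# advertisement with the shortest AS path from its own vantage point.

def best_route(advertisements):
    """Return the (data_center, path_length) pair with the shortest path."""
    return min(advertisements, key=lambda adv: adv[1])

# The same hypothetical prefix advertised from three sites, with
# different AS-path lengths as seen from two different client locations:
routes_seen_from_europe = [
    ("San Jose", 5),
    ("Amsterdam", 2),
    ("Tokyo", 6),
]
routes_seen_from_asia = [
    ("San Jose", 4),
    ("Amsterdam", 5),
    ("Tokyo", 1),
]

# Same destination IP, but a different winning data center per location:
print(best_route(routes_seen_from_europe)[0])  # Amsterdam
print(best_route(routes_seen_from_asia)[0])    # Tokyo
```

This is the property the traceroute example illustrates: the address is global, but the path, and therefore the data center reached, is local to the client.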
That makes it much harder for distributed denial of service attacks to go after servers behind CloudFlare's CDN network: if the attacking machines are geographically widespread, the traffic they generate gets spread across all of CloudFlare's data centers, as long as the network connections at each site aren't overwhelmed.
Today, a large collection of Web hosting and service companies announced that they will support Railgun, a compression protocol for dynamic Web content. The list includes the content delivery network and Web security provider CloudFlare, cloud providers Amazon Web Services and Rackspace, and thirty of the world’s biggest Web hosting companies.
Railgun is said to make it possible to double the performance of websites served up through CloudFlare's global network of data centers. The technology is largely written in Go, the open-source programming language launched by Google; it could significantly change the economics of hosting high-volume websites on Amazon Web Services and other cloud platforms because of the bandwidth savings it provides. It has already cut the bandwidth used by 4chan and Imgur by half. “We've seen a ~50% reduction in backend transfer for our HTML pages (transfer between our servers and CloudFlare's),” said 4chan's Chris Poole in an e-mail exchange with Ars. “And pages definitely load a fair bit snappier when Railgun is enabled, since the roundtrip time for CloudFlare to fetch the page is dramatically reduced. We serve over half a billion pages per month (and billions of API hits), so that all adds up fairly quickly.”
Rapid cache updates
Like most CDNs, CloudFlare caches static content at its data centers to help overcome the speed of light. But prepositioning content on a forward server typically hasn't helped performance much for dynamic webpages and Web traffic such as AJAX requests and mobile app API calls, which contain relatively little static content. That has become a problem for Internet services as traffic from mobile devices and dynamic websites has risen.
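The intuition behind a delta-compression scheme of the kind Railgun uses can be sketched as follows (this is an illustrative stand-in built on Python's standard `difflib`, not Railgun's actual wire protocol): successive versions of a "dynamic" page usually share most of their bytes, so sending only the changed regions between a cached copy and the fresh copy costs far less than resending the whole page.

```python
import difflib

def delta_size(cached, fresh):
    """Count the bytes of `fresh` that differ from `cached`, as a rough
    stand-in for what a delta-compression protocol would need to send."""
    matcher = difflib.SequenceMatcher(None, cached, fresh)
    # get_opcodes() labels each region 'equal', 'replace', 'insert', or
    # 'delete'; only the non-equal regions of the fresh page must travel.
    return sum(j2 - j1
               for tag, i1, i2, j1, j2 in matcher.get_opcodes()
               if tag != "equal")

# A hypothetical dynamic page where only a small fragment changed:
cached = "<html><body><h1>News</h1><p>Story from yesterday.</p></body></html>"
fresh  = "<html><body><h1>News</h1><p>Story from today.</p></body></html>"

print(f"full page: {len(fresh)} bytes, delta: {delta_size(cached, fresh)} bytes")
```

The larger and more template-heavy the page, the more the ratio favors the delta, which is consistent with the roughly 50 percent backend-transfer reduction 4chan reported above.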
jjp9999 writes "Recent findings published on Jan. 27 in the journal Nature Neuroscience may inspire you to get some proper sleep. Researchers at UC Berkeley found that REM sleep plays a key role in moving short-term memories from the hippocampus (where short-term memories are stored) to the prefrontal cortex (where long-term memories are stored), and that degeneration of the frontal lobe as we grow older may play a key role in forgetfulness. 'What we have discovered is a dysfunctional pathway that helps explain the relationship between brain deterioration, sleep disruption and memory loss as we get older – and with that, a potentially new treatment avenue,' said UC Berkeley sleep researcher Matthew Walker."