
VOIP

Original author: 
Jon Brodkin


Can Google's QUIC be faster than Mega Man's nemesis, Quick Man? (Image credit: Josh Miller)

Google, as is its wont, is always trying to make the World Wide Web go faster. To that end, Google in 2009 unveiled SPDY, a networking protocol that reduces latency and is now being built into HTTP 2.0. SPDY is now supported by Chrome, Firefox, Opera, and the upcoming Internet Explorer 11.

But SPDY isn't enough. Yesterday, Google released a boatload of information about its next protocol, one that could reshape how the Web routes traffic. QUIC—standing for Quick UDP Internet Connections—was created to reduce the number of round trips data makes as it traverses the Internet in order to load stuff into your browser.
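
The round-trip savings QUIC is chasing can be sketched with some back-of-the-envelope arithmetic. The RTT figure and handshake round-trip counts below are illustrative assumptions for this sketch, not Google's published numbers:

```python
# Connection setup cost is dominated by round trips before the first
# byte of payload arrives. All figures here are illustrative assumptions.
RTT_MS = 100  # assumed cross-country round-trip time

def setup_latency(round_trips, rtt_ms=RTT_MS):
    """Milliseconds spent on handshakes before any payload flows."""
    return round_trips * rtt_ms

tcp_tls = setup_latency(3)      # TCP handshake plus two classic TLS round trips
quic_first = setup_latency(1)   # QUIC aims for one round trip to a new server
quic_repeat = setup_latency(0)  # and zero round trips when reconnecting

print(tcp_tls, quic_first, quic_repeat)  # 300 100 0
```

Fewer round trips matter more as RTT grows, which is why the savings are most visible on mobile and intercontinental links.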

Although it is still in its early stages, Google is going to start testing the protocol on a "small percentage" of Chrome users who use the development or canary versions of the browser—the experimental versions that often contain features not stable enough for everyone. QUIC has been built into these test versions of Chrome and into Google's servers. The client and server implementations are open source, just as Chromium is.


Original author: 
Todd Hoff

Now that we have the C10K concurrent-connection problem licked, how do we level up and support 10 million concurrent connections? Impossible, you say? Nope, systems right now are delivering 10 million concurrent connections using techniques that are as radical as they are unfamiliar.

To learn how it’s done we turn to Robert Graham, CEO of Errata Security, and his absolutely fantastic talk at Shmoocon 2013 called C10M Defending The Internet At Scale.

Robert has a brilliant way of framing the problem that I'd never heard before. He starts with a little history, relating how Unix wasn't originally designed to be a general server OS; it was designed to be the control system for a telephone network. The telephone network itself transported the data, so there was a clean separation between the control plane and the data plane. The problem is that we now use Unix servers as part of the data plane, which we shouldn't do at all. If we were designing a kernel to handle one application per server, we would design it very differently than a multi-user kernel.

Which is why he says the key is to understand:

  • The kernel isn’t the solution. The kernel is the problem.

Which means:

  • Don’t let the kernel do all the heavy lifting. Take packet handling, memory management, and processor scheduling out of the kernel and put it into the application, where it can be done efficiently. Let Linux handle the control plane and let the application handle the data plane.

The result will be a system that can handle 10 million concurrent connections with 200 clock cycles for packet handling and 1,400 clock cycles for application logic. Since a single main-memory access costs about 300 clock cycles, it's key to design in a way that minimizes code paths and cache misses.

With a data plane oriented system you can process 10 million packets per second. With a control plane oriented system you only get 1 million packets per second.
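
The cycle budget quoted above can be sanity-checked with simple arithmetic (the 2 GHz clock is an assumed figure for illustration):

```python
# Sanity-check the per-packet cycle budget from the talk.
CYCLES_PER_PACKET = 200 + 1400   # packet handling + application logic
CLOCK_HZ = 2_000_000_000         # assume a 2 GHz core
TARGET_PPS = 10_000_000          # 10 million packets per second

cycles_needed = CYCLES_PER_PACKET * TARGET_PPS  # total cycles per second
cores_needed = cycles_needed / CLOCK_HZ
print(cores_needed)  # 8.0 -- feasible on commodity multi-core hardware

# A single main-memory miss (~300 cycles) blows the entire 200-cycle
# packet-handling budget, which is why cache-conscious design is critical.
MEM_MISS_CYCLES = 300
print(MEM_MISS_CYCLES > 200)  # True
```

In other words, the budget only closes if nearly every packet is handled out of cache, without the kernel's copies and context switches in the way.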

If this seems extreme keep in mind the old saying: scalability is specialization. To do something great you can’t outsource performance to the OS. You have to do it yourself.

Now, let’s learn how Robert creates a system capable of handling 10 million concurrent connections...


After disasters (or to minimize expensive data use generally, and take advantage of available Wi-Fi), bypassing the cell network is useful. But it's not something that handset makers bake into their phones. colinneagle writes with information on a project that tries to sidestep a dependence on the cellular carriers, if there is Wi-Fi near enough for at least some users: "The Smart Phone Ad-Hoc Networks (SPAN) project reconfigures the onboard Wi-Fi chip of a smartphone to act as a Wi-Fi router with other nearby similarly configured smartphones, creating an ad-hoc mesh network. These smartphones can then communicate with one another without an operational carrier network. SPAN intercepts all communications at the Global Handset Proxy so applications such as VoIP, Twitter, email etc., work normally."
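
As a rough illustration of the mesh idea (a toy model, not SPAN's actual routing; the topology and names are made up), each phone relays a message to its radio neighbors until it reaches the one node that still has an uplink:

```python
# Toy ad-hoc mesh: flood a message hop by hop until it reaches the
# single phone ("gateway") that still has Internet connectivity.
from collections import deque

neighbors = {                    # who is in Wi-Fi range of whom (illustrative)
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob", "gateway"],
    "gateway": ["carol"],        # the only node with an operational uplink
}

def flood(src, dst):
    """Breadth-first flood; returns the hop path the message takes."""
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:   # walk parents back to the source
                path.append(node)
                node = parent[node]
            return path[::-1]
        for peer in neighbors[node]:
            if peer not in parent:
                parent[peer] = node
                queue.append(peer)

print(flood("alice", "gateway"))  # ['alice', 'bob', 'carol', 'gateway']
```

Real mesh protocols add duplicate suppression, link-quality metrics, and route maintenance, but the multi-hop relay principle is the same.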


Read more of this story at Slashdot.


If you’ve ever had to dial in to a videoconference for work, you know what a painful experience it can be. Staring at the backs of people’s heads and struggling to see presentation materials doesn’t make for a fun time. But one company is hoping to change that by thinking outside the box — literally.

Today, Altia Systems, a start-up based in Cupertino, Calif., introduced a new video camera called PanaCast. It takes videoconferencing beyond a stationary, rectangular screen by providing a real-time panoramic view of the room and giving users the ability to pan and zoom the scene from their computer or mobile device.

“We felt that if we could bring a new experience that is as simple as the normal human experience of talking in real time, it could be really powerful for users,” said Aurangzeb Khan, co-founder and CEO of Altia Systems, in an interview with AllThingsD.

PanaCast uses a system of six cameras to capture HD video at 60 frames per second, and a custom-developed video processor that synchronizes and stitches all the images in real time to create a single, 200-degree panoramic view of the room. (PanaCast also runs the Linux operating system and features a dual-core ARM 11 processor.)

Altia’s server then uses a low-latency encoding process that allows you to stream the video over a cellular or Wi-Fi connection, unlike some videoconference systems that require dedicated bandwidth.

Remote participants can view video using the company’s Mac or Windows app or on their iPhone or Android devices. On mobile devices, you can use familiar touch gestures, such as pinch-to-zoom and swiping left or right, to zoom in on notes written on a whiteboard or to pan over to a speaker on the other side of the room.

“The experience you get with PanaCast is much more natural than current videoconferencing systems,” said Khan. “You get to interact with it in real time, and it makes a difference because you’re not a passive viewer anymore. You’re engaged in the discussion and how you want to participate in the discussion.”

I got a hands-on demo of the device last week (the company was originally scheduled to show PanaCast at our D: Dive into Mobile conference, which was postponed due to Hurricane Sandy), and I was actually surprised at what a difference the panoramic view made. It made for a better visual experience, and it was also helpful to see who was saying what, instead of hearing a faceless voice from one corner of the room.

Occasionally, I noticed some lag in the video, but the panning and zooming motions were very smooth, and worked well. One thing to note is that PanaCast does not have a built-in microphone.

Conference organizers will still need to use either a speakerphone or Polycom system. The PanaCast apps will have integrated VoIP audio, so participants can listen and talk using the app.

Altia says another benefit to its PanaCast system is cost. The company did not reveal exact pricing, but did say that it would be less than $700. Altia is launching PanaCast on Kickstarter, and the first 25 pledges of $399 will get a camera, with an estimated ship date of January.

The apps are free, and there is no subscription fee for up to two simultaneous remote participants. The company has a fundraising goal of $15,000 by Jan. 1.

Altia Systems was co-founded by Khan, CMO Lars Herlitz and CTO Atif Sarwari. The company has received $3 million in series A funding from Lanza TechVentures and private investors Dado and Rey Banatao.


Hi, I'm working on the National Vulnerability Database (NVD). I want to categorise the vulnerable software by category. I already have the categories and a good training set to feed into a machine learning algorithm.

The original idea was to use the description of the vulnerability in NVD to categorise the software, but this obviously won't work (the description covers the vulnerability, not the software).

Then we thought to download the first paragraph of the Wikipedia entry for that software. This works only about 10% of the time, as many entries do not match any page. One example is a page that cannot load at all; further manual Google queries seem to identify that software as a VoIP server. In some other cases, e.g. for the software Swift, the returned page is definitely not related to the software, and in the disambiguation page it is not even clear which entry should be the one of interest.

Do you have suggestions to mitigate this problem? More reliable software-related databases than Wikipedia? Better ways to query the dataset instead of feeding in the bare software name provided by NVD (e.g. up-ux_v, vendor:Nec)? Ways to include the vendor in the query, so as to make the results more reliable?
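
For concreteness, the kind of normalization plus fuzzy matching I have in mind looks roughly like this (the candidate titles, threshold, and helper names are just placeholders, not a tested pipeline):

```python
# Sketch: normalize a CPE-style token, prepend the vendor, then fuzzy-match
# against candidate article titles instead of trusting an exact page lookup.
import difflib

def normalize(cpe_name):
    """Turn an NVD/CPE-style token like 'up-ux_v' into a searchable phrase."""
    return cpe_name.replace("_", " ").replace("-", " ").strip().lower()

def best_match(vendor, product, candidate_titles):
    """Pick the candidate title closest to 'vendor product', or None."""
    query = f"{normalize(vendor)} {normalize(product)}"
    hits = difflib.get_close_matches(
        query, [t.lower() for t in candidate_titles], n=1, cutoff=0.3)
    return hits[0] if hits else None

titles = ["asterisk", "asterisk (pbx)", "obelisk"]
print(best_match("digium", "asterisk", titles))  # → 'asterisk'
```

Including the vendor in the query string is cheap and tends to break ties between unrelated pages that share a product name.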

Ever faced a problem like that?

Thanks!

submitted by mailor


About a year and a half ago, I researched the state of routers: about as unsexy as it gets but essential to the stability, reliability, and security of your Internet connection. My conclusion?

This is boring old plain vanilla commodity router hardware, but when combined with an open source firmware, it is a massive improvement over my three year old, proprietary high(ish) end router. The magic router formula these days is a combination of commodity hardware and open-source firmware. I'm so enamored of this one-two punch combo, in fact, I might even say it represents the future. Not just of the everyday workhorse routers we all need to access the Internet – but the future of all commodity hardware.

I felt a little bad about that post, because I quickly migrated from the DD-WRT open source firmware to OpenWRT, and then finally settled on Tomato. I guess that's open source for you: too many choices, with nobody to really tell you what's going to work reliably on your particular hardware. But the good news is that I've been running Tomato quite happily, with total stability, for about a year now – primarily because it is gloriously simple, but also because it has the most functional quality of service (QoS) implementation.


Why does functional Quality of Service matter so very much in a router? Unless you have an Internet connection that's only used by your grandmother to visit her church's website on Sundays, QoS is the difference between a responsive Internet and one that's brutally dog slow.

Ever sat in an Internet shop, a hotel room or lobby, or a local hotspot, and wondered why you can't access your email? Unknown to you, the guy in the next room or at the next table is hogging the bandwidth to download The Lord of the Rings Special Extended Edition in 1080p. You're screwed, because the hotspot router does not have an effective QoS system. In fact, I haven't come across a shop or an apartment block locally that has any QoS system in use at all. Most residents are not particularly happy with the service they [usually] pay for.
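
The mechanism is easy to sketch. Here's a minimal strict-priority scheduler (a toy model; Tomato's actual QoS is fancier, with rate limits and per-class bandwidth shares) showing why latency-sensitive traffic stays responsive even when bulk downloads pile up:

```python
# Toy strict-priority QoS scheduler: lower priority number is served first;
# within a class, packets keep FIFO order. Class names are illustrative.
import heapq

PRIORITY = {"voip": 0, "email": 1, "bulk": 2}

class QosScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tiebreaker: preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = QosScheduler()
sched.enqueue("bulk", "torrent-chunk-1")
sched.enqueue("voip", "voice-frame")
sched.enqueue("bulk", "torrent-chunk-2")
sched.enqueue("email", "imap-fetch")

order = [sched.dequeue() for _ in range(4)]
print(order)  # ['voice-frame', 'imap-fetch', 'torrent-chunk-1', 'torrent-chunk-2']
```

The torrent still downloads; it just never gets to jump the queue ahead of your voice call or email check.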

When I switched from DD-WRT and OpenWRT to Tomato, I had to buy a different router, because Tomato only supports certain router hardware, primarily Broadcom. The almost universal recommendation was the Asus RT-N16, so that's what I went with.


Asus RT-N16

And it is still an excellent choice. If you just want a modern, workhorse single band wireless N router that won't break the bank, but has plenty of power and memory to run Tomato, definitely try the Asus RT-N16. It's currently available for under $80 (after $10 rebate). Once you get Tomato on there, you've got a fine combination of hardware and software. Take it from this SmallNetBuilder user review:

I'm a semigeek. Some of the stuff on this site confuses me. But I figured out enough to get this router and install Tomato USB. Great combination. Have not had any problems with the router. Love all the features that Tomato gives me. Like blocking my son's iPod after 7 PM. Blocking certain websites. Yeah, I know you can do that with other routers but Tomato made it easy. Also love the QoS features. Netflix devices get highest bandwidth while my wife's bittorrent gets low.

Review was too heavily slanted against the Asus software, which I agree is crap. I bought the router for its hardware specs. Large memory. Fast processor. Gigabyte lan. 2 USB ports.

What's not to love? Well, the dual band thing, mainly. If you want a truly top of the line router with incredible range, and simultaneous dual band 2.4 GHz and 5 GHz performance bragging rights, fortunately there's the Asus RT-N66U.

Asus RT-N66U

This is, currently at least, the state of the art in routers. It has a faster CPU and twice the memory (256 MB) of the RT-N16. But at $190 it is also over twice the price. Judge for yourself in the SmallNetBuilder review:

As good as the RT-66U is, our wireless performance results once again show that no router is good in every mode that we test. But that said, the Dark Knight clearly outperformed both the NETGEAR WNDR4500 and Cisco Linksys E4200V2 in most of our two and three-stream tests. And it's the only router in recent memory able to reach to our worst-case/lowest-signal test location on the 5 GHz band, albeit with barely-usable throughput. Still, this is an accomplishment in itself.

If you're going to spend close to $200 for a wireless router, you should get a lot for your money. The Dark Knight seems to deliver wireless performance to justify its high price and has routing speed fast enough to handle any service a consumer is likely to have, even our friends in Europe and Asia.

Its only weakness? Take a guess. Oh wait, no need to guess: it's the same "weakness" the RT-N16 shares, the sketchy Asus firmware it ships with out of the box. That's why we get our Tomato on, people! There is complete and mature support for the RT-N66U in Tomato; for a walkthrough on how to get it installed (don't be shy, it's not hard), check out Shadow Andy's TomatoUSB firmware flashing guide.

Does having nice router hardware with a current open source firmware matter? Well, if your livelihood depends on the Internet like mine does, then I certainly think so.


At the very least, if you or someone you love is also an Internet fan and hasn't given any particular thought to what router they use, maybe it's time to start checking into that. Now if you'll excuse me, I'm going to go donate to the Tomato project.



Perst embedded database features and benefits

  • Object-oriented. Perst stores data directly in Java and .NET objects, eliminating the translation required for storage in relational and object-relational databases. This boosts run-time performance.
  • Compact. Perst’s core consists of only five thousand lines of code. The small footprint imposes minimal demands on system resources.
  • Fast. In the TestIndex and PolePosition benchmarks, Perst displays one of its strongest features: its significant performance advantage over Java and .NET embedded database alternatives.
  • Reliable. Perst supports transactions with the ACID (Atomic, Consistent, Isolated and Durable) properties, and requires no end-user administration.
  • Rich in development tools. The Perst API is flexible and easy-to-use. The breadth of Perst’s specialized collection classes is unparalleled. These include a classic B-Tree implementation; R-tree indexes for spatial data representation; database containers optimized for memory-only access, and much more.
  • Transparent persistence. Perst is distinguished by its ease in working with Java and C# objects, and suitability for aspect-oriented programming with tools such as AspectJ and JAssist. The result is greater efficiency in coding.
  • Source code available. With free, available source code, nothing in Perst is hidden, and the developer gains complete control of the application and its interaction with the database.
  • Advanced capabilities. Perst’s extras include garbage collection, schema evolution, a “wrapper” that provides a SQL-like database interface (SubSQL), XML import/export, database replication, support for large databases, and more.

 

Perst Design Principles

Perst's goal is to provide developers in Java and C# with a convenient and powerful mechanism to deal with large volumes of data. Perst's design principles include the following:

  • Persistent objects should be accessed in almost the same way as transient objects (transparent persistence)
  • A database engine should be able to efficiently manage much more data than can fit in main memory
  • No specialized preprocessors, enhancers, compilers, virtual machines or other tools should be required to use the database or develop applications with it.
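
Perst itself targets Java and .NET, but the flavor of the first principle, persistent objects read and written almost like ordinary in-memory objects, can be loosely illustrated with Python's standard shelve module. This is an analogy only, not Perst's API:

```python
# Loose cross-language analogy for transparent persistence: the program
# manipulates what look like ordinary objects, while the library handles
# serialization to disk behind the scenes. (Not Perst's API.)
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo-db")

# First session: storing a persistent object looks like a plain dict write.
with shelve.open(path) as db:
    db["account"] = {"owner": "alice", "balance": 42}

# A later session sees the same object, restored as an in-memory dict.
with shelve.open(path) as db:
    restored = db["account"]

print(restored["balance"])  # 42
```

In Perst the coupling is tighter still: persistence-capable classes derive from a base class (or are handled via aspect-oriented tools like AspectJ, as the feature list above notes), so loads and stores need no explicit translation layer at all.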