
Collusion for Chrome

Disconnect, the team behind privacy extensions like Facebook, Twitter, and Google Disconnect, has traditionally focused on stopping sites from sending your data back to social networks and other collection entities. These sites, however, aren't the only ones getting information from your browsing, and a new Disconnect tool, "Collusion for Chrome," will chart a map of where exactly your clicks are going.

That name ought to sound familiar — it's the same as an experimental Firefox extension that Mozilla created several weeks ago. On Firefox, Collusion opens a new, almost blank tab. As you browse, the tab adds a circle for each site, then sniffs out where that data is going. Within a few clicks, you're likely to have a tangled web linked...



fistful

Editor’s note: Jay Fulcher is CEO of video technology company Ooyala. This is a follow-up to his columns “Fear And Loathing In Online Video” and “One Screen To Rule Them All“. Follow him on Twitter @jbfulcher.

The rise of smart, multi-screen streaming media is fundamentally changing the TV experience. This year, for the first time ever, Americans will watch more movies over the Internet than on physical media like DVD and Blu-ray. Ooyala’s Video Index Report found that non-desktop video plays doubled in the fourth quarter of 2011. Tablet sales continue to explode. People now spend more time on Xbox Live streaming movies and TV shows than playing video games. And consumer electronics manufacturers are gearing up to ship 125 million Smart TVs in 2014. Simply put, TV is no longer constrained to a single box, a single screen, or a single UI.

Smart networks, broadcasters, studios and service providers recognize that there’s real money to be made as TV moves into the information age. People are not only watching more movies and TV shows online, they are paying for access to premium video content. Recent studies reveal that over half of American tablet owners paid to watch a movie in Q4 2011 and more than 40% paid for TV content. These are strong signs that we’ve come a long way from Jeff Zucker’s “digital pennies” remark back in 2008.

To make the most digital dollars, new TV technologies should securely deliver media to viewers on their terms. Audiences today have personal, portable ways to consume content. There are more screens, platforms and devices to display their favorite shows, and more ways than ever to rent, purchase, gift and download video content. It is an exciting time for both TV viewers and TV content providers.

Innovation is a tricky business, however, and change can be hard. There are bound to be a few missteps and failures as we invent the next generation of TV. This isn’t a new phenomenon. For every VHS recorder there is a Betamax; for every DVD, a Laserdisc. But there will also be key victories and new revenue streams as media and technology combine to create the TV experience of tomorrow.

Here’s how forward-thinking media companies will profit from the new TV.

Big Data & Analytics

More than a buzzword, Big Data is changing the way we look at information — and the world around us. The ability to quickly extract actionable insights from vast sets of data has already become a business imperative in some sectors. This trend can only grow. Corporations, governments, and non-governmental organizations will all leverage distributed computing to gain insights into their operations and their constituencies and maximize efficiencies.

Big Data and analytics will become mission critical for major media companies as TV moves to IP delivery. Firms that fail to invest in data-driven solutions will be at a severe disadvantage in the marketplace. Putting analytics tools in place to collect and analyze key metrics enables video publishers to see how people interact with their content — and understand where and why it’s underperforming (something that was impossible before). These insights will inform critical business decisions that impact audiences and drive revenue.

Intelligent Monetization

As we all know, the easiest way to make more money in media is to sell more advertising. But simply inserting more pre-roll ads into a video stream, for example, quickly falls prey to the law of diminishing marginal returns. An initial uptick in revenue is followed by a substantial dropoff in ad completion rates, as viewers quickly grow weary of the oversupply of irrelevant ad messages.

Smart monetization strategies go hand-in-hand with analytics. With the right tools in place, video publishers can analyze how variables like ad load (the number of ads served per video) and ad placement (where ads are inserted within the video) impact viewer engagement. It’s even possible to find the optimal rental price for, say, a feature-length movie. And soon it will be commonplace to match ads to viewers based on social graph interests, location, device type, and viewing history.
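
To make this concrete, here is a minimal sketch in plain Python (the field names and sample records are hypothetical, not any particular vendor's API) of the kind of aggregation such an analytics tool would run at far larger scale: grouping video plays by ad load and comparing ad completion rates.

```python
from collections import defaultdict

# Hypothetical play log: for each video play, how many ads were served
# (the ad load) and how many of them the viewer watched to completion.
plays = [
    {"ad_load": 1, "ads_completed": 1},
    {"ad_load": 1, "ads_completed": 1},
    {"ad_load": 3, "ads_completed": 2},
    {"ad_load": 5, "ads_completed": 2},
    {"ad_load": 5, "ads_completed": 1},
]

served = defaultdict(int)
completed = defaultdict(int)
for play in plays:
    served[play["ad_load"]] += play["ad_load"]
    completed[play["ad_load"]] += play["ads_completed"]

for ad_load in sorted(served):
    rate = completed[ad_load] / served[ad_load]
    print(f"ad load {ad_load}: completion rate {rate:.0%}")
```

With this toy data the completion rate falls as the ad load rises, which is exactly the diminishing-returns pattern described above.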

Smart video publishers will use analytics to simultaneously accomplish two somewhat conflicting goals: (1) maximize digital revenue, and (2) create and/or maintain an optimal viewing experience for their viewers.

Personalized Content

A streaming media strategy based on Big Data computing, powerful analytics and smart monetization results in a personalized viewing experience across all connected screens. Content producers and providers will attract and retain more viewers when they deliver highly relevant content, presented in the way each viewer prefers.

Insights derived from vast data collection ensure that the right content is delivered to the right viewer at the right time. The future of personalized television is geo-targeted, interactive content. Viewers who opt to share data will receive a better experience: location-specific ads, augmented-reality media experiences, interactive games, and content targeted to their viewing history, network and device. Content publishers will also tap into social networks to deliver meaningful content that is informed by viewer interests. As social media continues to evolve, expect video to play a bigger role in how we relate to one another online.

In Sum…

The TV of tomorrow will be smart. It will understand who is watching, where they are, and what shows they enjoy. The end result will be a more personal TV experience that spans multiple screens and locations.

TV is changing quickly. There is a real need for companies to recognize and get out ahead of this change. With the right tools (like those offered by my company Ooyala), fistfuls of digital dollars are there for the taking.


Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks. This article is concerned primarily with routing in electronic data networks using packet switching technology.

In packet switching networks, routing directs packet forwarding, the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. Most routing algorithms use only one network path at a time, but multipath routing techniques enable the use of multiple alternative paths.

Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing table entry to represent the route to a group of devices, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large networks, and has become the dominant form of addressing on the Internet, though bridging is still widely used within localized environments.
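
To make the aggregation point concrete, here is a minimal longest-prefix-match lookup in Python, using the standard ipaddress module; the table entries are made up. A single /24 entry stands in for every host in that subnet, which is exactly what unstructured (bridged) addressing cannot do.

```python
import ipaddress

# Hypothetical routing table: one prefix entry covers a whole group of hosts.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.2.0/24"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def lookup(destination: str) -> str:
    """Return the interface of the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("10.1.2.7"))   # eth1 (most specific match)
print(lookup("10.9.9.9"))   # eth0
print(lookup("8.8.8.8"))    # eth2 (default route)
```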

Delivery semantics


Routing schemes differ in their delivery semantics:

  • unicast delivers a message to a single specific node;
  • broadcast delivers a message to all nodes in the network;
  • multicast delivers a message to a group of nodes that have expressed interest in receiving the message;
  • anycast delivers a message to any one out of a group of nodes, typically the one nearest to the source;
  • geocast delivers a message to a geographic area.

Unicast is the dominant form of message delivery on the Internet, and this article focuses on unicast routing algorithms.

Topology distribution

In a practice known as static routing (or non-adaptive routing), small networks may use manually configured routing tables. Larger networks have complex topologies that can change rapidly, making the manual construction of routing tables unfeasible. Nevertheless, most of the public switched telephone network (PSTN) uses pre-computed routing tables, with fallback routes if the most direct route becomes blocked (see routing in the PSTN). Adaptive routing, or dynamic routing, attempts to solve this problem by constructing routing tables automatically, based on information carried by routing protocols, and allowing the network to act nearly autonomously in avoiding network failures and blockages.

Examples of adaptive-routing algorithms are the Routing Information Protocol (RIP) and the Open Shortest Path First protocol (OSPF). Adaptive routing dominates the Internet. However, configuring the routing protocols often requires a skilled touch; networking technology has not yet advanced to the point of fully automating routing.

Distance vector algorithms

Main article: Distance-vector routing protocol

Distance vector algorithms use the Bellman-Ford algorithm. This approach assigns a number, the cost, to each of the links between each node in the network. Nodes will send information from point A to point B via the path that results in the lowest total cost (i.e. the sum of the costs of the links between the nodes used).

The algorithm operates in a simple manner. When a node first starts, it knows only its immediate neighbours and the direct cost of reaching them. (This information, namely the list of destinations, the total cost to each, and the next hop to use to reach them, makes up the routing table, or distance table.) On a regular basis, each node sends each neighbour its current estimate of the total cost to reach every destination it knows of. The neighbouring nodes examine this information and compare it to what they already know; anything that represents an improvement on what they already have, they insert into their own routing tables. Over time, all the nodes in the network discover the best next hop and the best total cost for every destination.

When one of the nodes involved goes down, the nodes which used it as their next hop for certain destinations discard those entries and create new routing-table information. They then pass this information to all adjacent nodes, which in turn repeat the process. Eventually all the nodes in the network receive the updated information and discover new paths to all the destinations they can still reach.
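
The following Python sketch (the topology and link costs are invented) illustrates the exchange described above: every node repeatedly shares its current distance table with its neighbours and adopts any route that improves on what it already knows.

```python
# Hypothetical topology: direct link costs between neighbouring nodes.
links = {
    ("A", "B"): 1, ("B", "A"): 1,
    ("B", "C"): 2, ("C", "B"): 2,
    ("A", "C"): 5, ("C", "A"): 5,
}
nodes = {"A", "B", "C"}
neighbours = {n: {m for (a, m) in links if a == n} for n in nodes}

# Each node starts out knowing only itself and its direct neighbours:
# tables[node][destination] = (total cost, next hop)
tables = {n: {n: (0, n)} for n in nodes}
for (a, b), cost in links.items():
    tables[a][b] = (cost, b)

changed = True
while changed:                      # repeat until no table improves
    changed = False
    for node in nodes:
        for nb in neighbours[node]:
            link_cost = links[(node, nb)]
            for dest, (cost, _) in tables[nb].items():
                candidate = link_cost + cost
                if dest not in tables[node] or candidate < tables[node][dest][0]:
                    tables[node][dest] = (candidate, nb)
                    changed = True

print(tables["A"])  # the route from A to C goes via B with total cost 3
```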

Link-state algorithms

Main article: Link-state routing protocol

When applying link-state algorithms, each node uses as its fundamental data a map of the network in the form of a graph. To produce this, each node floods the entire network with information about what other nodes it can connect to, and each node then independently assembles this information into a map. Using this map, each router then independently determines the least-cost path from itself to every other node using a standard shortest paths algorithm such as Dijkstra's algorithm. The result is a tree rooted at the current node such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node.
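
A minimal Python sketch of that computation (the graph is made up): Dijkstra's algorithm is run from one node, and the first hop on each shortest path is recorded as the routing-table entry.

```python
import heapq

# Hypothetical link-state map assembled from flooded advertisements:
# graph[node] = {neighbour: link cost}
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2, "D": 4},
    "C": {"A": 5, "B": 2, "D": 1},
    "D": {"B": 4, "C": 1},
}

def next_hops(source):
    """Run Dijkstra from `source`; return {destination: (cost, first hop)}."""
    table = {}
    # Heap entries: (total cost, destination, first hop taken from source).
    heap = [(0, source, source)]
    while heap:
        cost, node, first_hop = heapq.heappop(heap)
        if node in table:
            continue                     # already settled with a cheaper path
        table[node] = (cost, first_hop)
        for nb, link_cost in graph[node].items():
            if nb not in table:
                hop = nb if node == source else first_hop
                heapq.heappush(heap, (cost + link_cost, nb, hop))
    return table

print(next_hops("A"))  # e.g. D is reached at total cost 4 via first hop B
```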

Optimised Link State Routing algorithm

Main article: Optimized Link State Routing Protocol

A link-state routing algorithm optimised for mobile ad-hoc networks is the Optimised Link State Routing Protocol (OLSR).[1] OLSR is proactive; it uses Hello and Topology Control (TC) messages to discover and disseminate link state information through the mobile ad-hoc network. Using Hello messages, each node discovers 2-hop neighbor information and elects a set of multipoint relays (MPRs). MPRs distinguish OLSR from other link state routing protocols.

Path vector protocol

Main article: Path vector protocol

Distance vector and link state routing are both intra-domain routing protocols. They are used inside an autonomous system, but not between autonomous systems. Both become intractable in large networks and cannot be used in inter-domain routing. Distance vector routing is subject to instability if there are more than a few hops in the domain. Link state routing needs a huge amount of resources to calculate routing tables and creates heavy traffic due to flooding.

Path vector routing is used for inter-domain routing. It is similar to distance vector routing. In path vector routing we assume there is one node (there can be many) in each autonomous system that acts on behalf of the entire autonomous system. This node is called the speaker node. The speaker node creates a routing table and advertises it to neighboring speaker nodes in neighboring autonomous systems. The idea is the same as distance vector routing, except that only speaker nodes in each autonomous system can communicate with each other, and the speaker node advertises the path, not the metric, of the nodes in its autonomous system or in other autonomous systems.

Path vector routing is discussed in RFC 1322; the path vector routing algorithm is somewhat similar to the distance vector algorithm in the sense that each border router advertises the destinations it can reach to its neighboring router. However, instead of advertising networks in terms of a destination and the distance to that destination, networks are advertised as destination addresses and path descriptions to reach those destinations. A route is defined as a pairing between a destination and the attributes of the path to that destination, hence the name path vector routing, where the routers receive a vector that contains paths to a set of destinations. The path, expressed in terms of the domains (or confederations) traversed so far, is carried in a special path attribute that records the sequence of routing domains through which the reachability information has passed.
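
In the spirit of the description above, here is a small Python sketch (the AS numbers, prefixes, and tie-breaking rule are invented) of how a speaker node might process a path-vector advertisement: it rejects any path that already contains its own AS, which is how path vector routing avoids loops, and otherwise keeps the shortest AS path per destination.

```python
# Hypothetical path-vector table of AS 100:
# routes[destination prefix] = list of ASes on the path, nearest first.
routes = {"203.0.113.0/24": [200, 300]}

MY_AS = 100

def receive_advertisement(neighbour_as, prefix, as_path):
    """Process one advertisement from a neighbouring speaker node."""
    path = [neighbour_as] + as_path
    if MY_AS in path:
        return                      # our own AS is already on the path: loop, discard
    current = routes.get(prefix)
    if current is None or len(path) < len(current):
        routes[prefix] = path       # prefer the shorter AS path

receive_advertisement(400, "198.51.100.0/24", [500])      # new destination, accepted
receive_advertisement(200, "203.0.113.0/24", [300, 100])  # contains AS 100: rejected
print(routes)
```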

Comparison of routing algorithms

Distance-vector routing protocols are simple and efficient in small networks and require little, if any, management. However, traditional distance-vector algorithms have poor convergence properties due to the count-to-infinity problem.

This has led to the development of more complex but more scalable algorithms for use in large networks. Interior routing mostly uses link-state routing protocols such as OSPF and IS-IS.

A more recent development is that of loop-free distance-vector protocols (e.g., EIGRP). Loop-free distance-vector protocols are as robust and manageable as naive distance-vector protocols, but avoid counting to infinity, and have good worst-case convergence times.

Path selection

Path selection involves applying a routing metric to multiple routes, in order to select (or predict) the best route.

In the case of computer networking, the metric is computed by a routing algorithm and can cover such information as bandwidth, network delay, hop count, path cost, load, MTU, reliability, and communication cost. The routing table stores only the best possible routes, while link-state or topological databases may store all other information as well.

Because a routing metric is specific to a given routing protocol, multi-protocol routers must use some external heuristic in order to select between routes learned from different routing protocols. Cisco's routers, for example, attribute a value known as the administrative distance to each route, where smaller administrative distances indicate routes learned from a supposedly more reliable protocol.
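
A toy illustration of that heuristic in Python: when the same destination is learned from several protocols, the route with the lowest administrative distance wins. The distance values below mirror commonly cited Cisco defaults, but treat them as an assumption rather than a specification.

```python
# Commonly cited default administrative distances (treated here as assumptions).
ADMIN_DISTANCE = {"connected": 0, "static": 1, "EIGRP": 90, "OSPF": 110, "RIP": 120}

# Routes to the same prefix learned from different routing protocols.
candidates = [
    {"protocol": "RIP",  "next_hop": "10.0.0.1"},
    {"protocol": "OSPF", "next_hop": "10.0.0.2"},
]

best = min(candidates, key=lambda r: ADMIN_DISTANCE[r["protocol"]])
print(best)   # the OSPF route is installed: 110 beats RIP's 120
```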

In special cases, a local network administrator can set up host-specific routes to a particular machine; these provide more control over network usage, permit testing, and improve overall security. This can be handy when debugging network connections or routing tables.

Multiple agents

In some networks, routing is complicated by the fact that no single entity is responsible for selecting paths: instead, multiple entities are involved in selecting paths or even parts of a single path. Complications or inefficiency can result if these entities choose paths to optimize their own objectives, which may conflict with the objectives of other participants.

A classic example involves traffic in a road system, in which each driver picks a path which minimizes their own travel time. With such routing, the equilibrium routes can be longer than optimal for all drivers. In particular, Braess's paradox shows that adding a new road can lengthen travel times for all drivers.

In another model, for example used for routing automated guided vehicles (AGVs) on a terminal, reservations are made for each vehicle to prevent simultaneous use of the same part of an infrastructure. This approach is also referred to as context-aware routing.[2]

The Internet is partitioned into autonomous systems (ASs) such as internet service providers (ISPs), each of which has control over routes involving its network, at multiple levels. First, AS-level paths are selected via the BGP protocol, which produces a sequence of ASs through which packets will flow. Each AS may have multiple paths, offered by neighboring ASs, from which to choose. Its decision often involves business relationships with these neighboring ASs,[3] which may be unrelated to path quality or latency. Second, once an AS-level path has been selected, there are often multiple corresponding router-level paths, in part because two ISPs may be connected in multiple locations. In choosing the single router-level path, it is common practice for each ISP to employ hot-potato routing: sending traffic along the path that minimizes the distance through the ISP's own network—even if that path lengthens the total distance to the destination.

Consider two ISPs, A and B, which each have a presence in New York, connected by a fast link with latency 5 ms; and which each have a presence in London connected by a 5 ms link. Suppose both ISPs have trans-Atlantic links connecting their two networks, but A's link has latency 100 ms and B's has latency 120 ms. When routing a message from a source in A's London network to a destination in B's New York network, A may choose to immediately send the message to B in London. This saves A the work of sending it along an expensive trans-Atlantic link, but causes the message to experience latency 125 ms when the other route would have been 20 ms faster.
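
A few lines of Python reproduce the arithmetic of the example (all latencies as given above); the "cold potato" alternative is A carrying the traffic across its own trans-Atlantic link before handing it off.

```python
LONDON_PEERING = 5        # ms, A <-> B link in London
NEW_YORK_PEERING = 5      # ms, A <-> B link in New York
A_TRANSATLANTIC = 100     # ms, A's own trans-Atlantic link
B_TRANSATLANTIC = 120     # ms, B's trans-Atlantic link

hot_potato = LONDON_PEERING + B_TRANSATLANTIC       # hand off early: 125 ms
cold_potato = A_TRANSATLANTIC + NEW_YORK_PEERING    # carry it yourself: 105 ms
print(hot_potato, cold_potato, hot_potato - cold_potato)  # 125 105 20
```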

A 2003 measurement study of Internet routes found that, between pairs of neighboring ISPs, more than 30% of paths have inflated latency due to hot-potato routing, with 5% of paths being delayed by at least 12 ms. Inflation due to AS-level path selection, while substantial, was attributed primarily to BGP's lack of a mechanism to directly optimize for latency, rather than to selfish routing policies. It was also suggested that, were an appropriate mechanism in place, ISPs would be willing to cooperate to reduce latency rather than use hot-potato routing.[4]

Such a mechanism was later published by the same authors, first for the case of two ISPs[5] and then for the global case.[6]

Route analytics

As the Internet and IP networks become mission-critical business tools, there has been increased interest in techniques and methods for monitoring the routing posture of networks. Incorrect routing or routing issues cause undesirable performance degradation, flapping and/or downtime. Monitoring routing in a network is achieved using route analytics tools and techniques.



VISSOFT 2011, the 6th IEEE International Workshop on Visualizing Software for Understanding and Analysis, took place in Williamsburg, Virginia, USA, September 29-30, 2011 and was co-located with ICSM 2011. The proceedings are available in the IEEE Xplore Digital Library. Of the 21 full papers submitted to the workshop, nine were accepted (acceptance rate: 42%).

This post is by Fabian Beck, a new contributor to the SoftVis blog, and presents the nine full papers of VISSOFT 2011 in the form of a structured review.

Code Dependencies

Dependencies between source code artifacts provide insights into the structure of a software system. Visualizing them is challenging because of the sheer quantity of dependencies usually included in a system. Often the hierarchical organization of the system (e.g., a hierarchical package structure) helps to structure or simplify these dependencies. Many of the papers presented at VISSOFT 2011 involve the visualization of code dependencies; the following three, however, have them in particular focus.

Caserta et al. use the hierarchical organization to map the source code artifacts to a two-dimensional plane, exploiting a city metaphor. On top of this plane, in the third dimension, they draw the dependencies that link the artifacts. The authors reduce the visual complexity of the diagram by bundling the dependencies, again using the hierarchical structure of the artifacts. Such hierarchical edge bundling is also used in the approach by Beck F. et al., which focuses on comparing different types of code dependencies. They arrange the hierarchically structured artifacts linearly from top to bottom, and each type of dependency is depicted in a narrow stripe. By arranging the stripes side by side, these types can be visually compared. The idea of depicting different types of dependencies is also present in the approach by Erdemir et al., who link those dependencies to different quality metrics by encoding them in the shape, color, texture, etc., of the nodes. A distinguishing feature of their approach is the intuitive mapping between metrics and visual attributes.

Dynamic Program Behavior

The analysis of the dynamic behavior of software systems usually faces even larger sets of data than the analysis of static code dependencies. Visualization approaches depicting such dynamic data hence have to apply some form of aggregation to handle the data. As the following two examples show, this aggregation can be realized in different ways.

Code dependencies can be enriched with dynamic information, for instance, as demonstrated by Deng et al.: they reinforce structural dependencies if the connected code entities are covered by the same set of test cases. This dynamic relational information is used only to arrange a large set of code entities on a two-dimensional plane. For reasons of scalability, their visualization shows these entities as color-coded dots but does not depict dependencies. Choudhury and Rosen introduce an animated visualization technique for representing the runtime memory transactions of a program. Elements of accessed data, visually encoded as glyphs, move between the levels of a simulated cache. These glyphs form abstract visual structures that provide insights into the composition and eviction behavior of the cache. The authors suggest that their approach fosters program comprehension with respect to the dynamic behavior of the program.

Visual Analytics Tools & Case Studies

Over the last decade, many software visualization approaches have been proposed, but only very few of them have achieved wide impact in practice. One reason could be that academia has not sufficiently embedded those visualizations into real-world scenarios and development processes. The following analytics tools and case studies might help narrow this gap.

When gaining information by visually analyzing a software system, a software developer wants to react to these new insights. Beck M. et al. introduce a tool for planning the reengineering of a software system, that is, for changing the software design at a high level. Their tool is also based on a visualization of code dependencies combined with a hierarchy. The innovative features of their approach are the flexible interaction techniques for changing the hierarchy (i.e., the software design). Broeksema and Telea also aim to support developers in changing a software system: they investigate the effort required to port a system from an older version of a library to a newer one. Their visualization, implemented as an IDE plug-in for KDevelop, provides an overview of the necessary changes and guides the developer in realizing them.

Neu et al. study not only single software projects, but also the ecosystems the individual projects are embedded in. They developed a web application that visualizes the evolution of such ecosystems; as an extensive case study, the GNOME project is analyzed. Long practiced but only recently considered a serious development method, sketches are explored by Walny et al. In semi-structured interviews, they evaluated how software developers in academia use these ad-hoc software visualizations. The interviews reveal that sketches are applied in a broad spectrum of scenarios and are often saved and reused in later stages of development.
