Aria Networks shows the optimal path

April 26, 2007

In a couple of previous posts, Path Computation Element (PCE): IETF’s hidden jewel and MPLS-TE and network traffic engineering I mentioned a company called Aria Networks who are working in the technology space discussed in those posts. I would like to take this opportunity to write a little about them.

Aria Networks are a small UK company that have been going for around eighteen months. The company is commercially led by Tony Fallows; the core technology has been developed by Dr Jay Perrett, Chief Science Officer and Head of R&D, and Daniel King, Chief Operating Officer; and their CTO is Adrian Farrel. Adrian currently co-chairs the IETF Common Control and Measurement Plane (CCAMP) working group that is responsible for GMPLS and also co-chairs the IETF Path Computation Element (PCE) working group.

The team at Aria have brought some very innovative software technology to the products they supply to network operators and network equipment vendors. Their raison d’etre, as articulated by Daniel King, is “to fundamentally change the way complex, converged networks are designed, planned and operated”. This is an ambitious goal, so let’s take a look at how Aria plan to achieve it.

Aria currently supplies software that addresses the complex task of computing constraint-based packet paths across an IP or MPLS network and optimising that network holistically and in parallel. Holistic is a key word in understanding Aria’s products. It means that when an additional path needs to be computed in a network, the whole network and all the services running over it are recalculated and optimised in a single calculation. A simple example of why this is so important is shown here.

This ability to compute holistically rather than on a piecemeal basis requires some very slick software, as it is a computationally intensive (‘hard’) problem that could easily take many hours using other systems. Parallel is the other key word. When an additional link is added to a network there could be a knock-on effect on any other link in the network, so re-computing all the paths in parallel – both existing and new – is the only way to ensure a reliable and optimal result.

Traffic engineering of IP, MPLS or Ethernet networks could quite easily be dismissed by the non-technical management of a network operator as an arcane activity but, as anyone with experience of operating networks can vouch, good traffic engineering brings pronounced benefits that directly reduce costs while improving customers’ experience of using services. Of course, a lack of appropriate traffic engineering activity has the opposite effect. Only one thing could be put above traffic engineering in achieving a good brand image, and that is good customer service. The irony is that if money is not spent on good traffic engineering, ten times the amount will need to be spent on call centre facilities papering over the cracks!

One quite common view held by a number of engineers is that traffic engineering is not required because, as they say, “we throw bandwidth at our network”. If a network has an abundance of bandwidth then in theory there will never be any delays caused by an inadvertent overload of a particular link. This may be true, but it is certainly an expensive and short-sighted solution and one that could turn out to be risky as new customers come on board. Combined with the slow provisioning times often associated with adding additional optical links, it can cause major network problems. The challenge of planning and optimising is significantly increased in a Next Generation Network (NGN), where traffic is actively segmented into different traffic classes such as real-time VoIP and best-effort Internet access. Traffic engineering tools will become even more indispensable than they have been in the past.

It’s interesting to note that even if a protocol like MPLS-TE, PBT, PBB-TE or T-MPLS has all the traffic engineering bells and whistles any carrier may ever desire, it does not mean they can be used. TE extensions such as Fast ReRoute (FRR) need sophisticated tools or they quickly become unmanageable in a real network.

Aria’s product family is called intelligent Virtual Network Topologies (iVNT). Current products are aimed at network operators that operate IP and / or MPLS-TE based networks.

iVNT MPLS-TE enables network operators to design, model and optimise MPLS Traffic Engineered (MPLS-TE) networks that use constraint-based point-to-point Label Switched Paths (LSPs), constraint-based point-to-multipoint LSPs and Fast-Reroute (FRR) bypass tunnels. One of its real strengths is that it goes to town on supporting any type of constraint that could be placed on a link – delay, hop count, cost, required bandwidth, link-layer protection, path disjointedness, bi-directionality, etc. Indeed, it is quite straightforward to add any additional constraints that an individual carrier may need.
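To make the constraint idea concrete, here is a minimal sketch, in Python, of checking a candidate path against a handful of the constraints listed above. It is purely illustrative: the link attributes, figures and function names are my own assumptions, not anything taken from iVNT.

```python
# A minimal sketch (not Aria's iVNT code) of checking a candidate LSP
# against a set of per-path constraints such as those listed above.
# Link attributes and constraint values here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Link:
    delay_ms: float        # propagation + queuing delay estimate
    cost: float            # administrative cost metric
    free_bw_mbps: float    # unreserved bandwidth

def path_satisfies(path, max_delay_ms=None, max_hops=None, min_bw_mbps=None):
    """Return True if the ordered list of Links meets every stated constraint."""
    if max_hops is not None and len(path) > max_hops:
        return False
    if max_delay_ms is not None and sum(l.delay_ms for l in path) > max_delay_ms:
        return False
    if min_bw_mbps is not None and min(l.free_bw_mbps for l in path) < min_bw_mbps:
        return False
    return True

# Example: a three-hop candidate path checked against a 20 ms delay budget
candidate = [Link(5, 10, 400), Link(7, 10, 250), Link(6, 10, 900)]
print(path_satisfies(candidate, max_delay_ms=20, max_hops=4, min_bw_mbps=100))  # True
```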

iVNT IP enables network operators to design, model and optimise IP and Label Distribution Protocol (LDP) networks based on the metrics used in the Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS) Interior Gateway Protocols (IGPs), to ensure traffic flows are correctly balanced across the network. Although not using more advanced traffic engineering capabilities is clearly not the way to go in the future, many carriers still stick with ‘simple’ IP solutions – however, these are far from simple in practice and can be an operational nightmare to manage.

What makes Aria software so interesting?

To cut to the chase, it’s the use of Artificial Intelligence (AI) applied to path computation and network optimisation. Conventional algorithms used in network optimisation software are linear in nature and are usually deterministic, in that they produce the same answer for the same set of variables every time they are run. They are usually ‘tuned’ to a single service type and are often very slow to produce results when faced with a very large network that uses many paths and carries many services. Aria’s software may produce different, but still correct, results each time it is run and is able to handle multiple services that are inherently and significantly different from a topology perspective, e.g. point-to-point (P2P) and point-to-multipoint (P2MP) services, mesh-like IP-VPNs, etc.

Aria uses evolutionary and genetic techniques, which are good at learning new problems, and runs multiple algorithms in parallel. The software then selects whichever algorithm is better at solving the particular problem it is challenged with. The model evolves multiple times and quickly converges on the optimal solution. Importantly, the technology is very amenable to parallel computing to speed up the processing of complex problems such as those required in holistic network optimisation.

It generally does not make sense to use the same algorithm to solve all the network path optimisation needs of different services – iVNT runs many in parallel and self-selection shows which is the most suitable for the current problem.
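To illustrate the evolutionary idea in the abstract – this is emphatically not Aria’s DANI, just a toy under my own assumptions – here is a sketch in which each ‘individual’ is an assignment of traffic demands to candidate paths, and fitness rewards keeping the busiest link lightly loaded:

```python
# A toy evolutionary sketch: evolve an assignment of demands to candidate paths
# so that the most heavily loaded link is as lightly loaded as possible.
# All topology and traffic figures below are invented for illustration.
import random

# demand -> list of candidate paths, each path a tuple of link names
candidates = {
    "d1": [("A-B", "B-C"), ("A-D", "D-C")],
    "d2": [("A-B", "B-C"), ("A-D", "D-C")],
    "d3": [("A-D", "D-C")],
}
demand_mbps = {"d1": 300, "d2": 400, "d3": 200}
link_capacity = 1000.0

def worst_link_load(individual):
    load = {}
    for d, choice in individual.items():
        for link in candidates[d][choice]:
            load[link] = load.get(link, 0) + demand_mbps[d]
    return max(load.values()) / link_capacity

def random_individual():
    return {d: random.randrange(len(paths)) for d, paths in candidates.items()}

def mutate(ind):
    child = dict(ind)
    d = random.choice(list(candidates))
    child[d] = random.randrange(len(candidates[d]))
    return child

population = [random_individual() for _ in range(20)]
for _ in range(50):                       # evolve for a fixed number of generations
    population.sort(key=worst_link_load)  # lower worst-case utilisation is fitter
    parents = population[:10]             # keep the better half
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = min(population, key=worst_link_load)
print(best, worst_link_load(best))
```

Selection keeps the better half of the population each generation and mutation explores alternative assignments; a production tool would obviously use far richer representations and fitness functions than this.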

Aria’s core technology is called DANI (Distributed Artificial Neural Intelligence) and is a “flexible, stable, proven, scalable and distributed computation platform”. DANI was developed by two of Aria’s founders, Jay Perrett and Daniel King, and has had a long proving ground in the pharmaceutical industry for pre-clinical drug discovery, which requires the analysis of millions of individual pieces of data to isolate interesting combinations. The company that addresses the pharmaceutical industry is Applied Insilico.

Because of the use of AI, iVNT is able to compute a solution for a complex network containing thousands of different constraint-based links, hundreds of nodes and multiple services such as P2P LSPs, Fast Reroute (FRR) links, P2MP links (IPTV broadcast) and meshed IP-VPN services in just a few minutes on one of today’s notebooks.

What’s the future direction for Aria’s products?

Step to multi-layer path computation: As discussed in the posts mentioned above, Aria is very firmly supportive of the need to provide automatic multi-layer path computation. This means that the addition of a new customer’s IP service will be passed as a bandwidth demand to the MPLS network and downwards to the GMPLS-controlled ASTN optical network, as discussed in GMPLS and common control.

Path Computation Element (PCE): Aria are at the heart of the development of on-line path computation so if this is a subject of interest to you then give Aria a call.

Two product variants address this opportunity:

iVNT Inside is aimed at Network Management System (NMS) vendors, Operational Support System (OSS) vendors and Path Computation Element (PCE) vendors that have a need to provide advanced path computation capabilities embedded in their products.

iVNT Element is for network equipment vendors that have a need to embed advanced path computation capabilities in their IP/MPLS routers or optical switches.

Roundup

Aria Networks could be considered a rare company in the world of start-ups. It has a well-tried technology whose inherent characteristics are admirably matched to the markets and the technical problems it is addressing. Its management team are actively involved in developing the standards that their products are, or will be, able to support. There could be no better basis for getting their products right.

It is early days for carriers turning in their droves to NGNs and it is even earlier days for them to adopt on-line PCE in their networks, but Aria’s timing is on the nose as most carriers are actively thinking about these issues and are actively looking for tools today.

Aria could be well positioned to benefit from the explosion of NGN convergence as it seems – to me at least – that fully converged networks will be very challenging to design, optimise and operate without the new approach and tools from companies such as Aria.

Note: I need to declare an interest as I worked with them for a short time in 2006.


PONs are anything but passive

April 18, 2007

Passive Optical Networks (PONs) are an enigma to me in many ways. On one hand, the concept goes back to the late 1980s and has been floating around ever since, with obligatory presentations from the large vendors whenever you visited them. Yes, for sure there are Pacific Rim countries and the odd state or incumbent carrier in the western world deploying the technology, but they never seemed to impact my part of the world. On the other hand, the technology would provide virtual Internet nirvana for me at home, with 100Mbit/s available to support video on demand for each member of my family who, in 21st century fashion, have computers in virtually every room of the house! This high bandwidth still seems as far away as ever, with an average downstream bandwidth of 4.5Mbit/s in the UK. I see 7Mbit/s as I am close to a BT exchange. We are still struggling to deliver 21st century data services over 19th century copper wires using ATM-based Digital Subscriber Line (DSL) technology. If you are in the right part of the country, you get marginally higher rates from your cable company. Why are carriers not installing optical fibre to every home?

To cut to the chase, it’s because of the immense cost of deploying it. Fibre to the Home (FTTH), as it is known, requires the installation of a completely new optical fibre infrastructure between the carrier’s exchanges and homes. Such an initiative would almost require a government-led and paid-for programme to make it worthwhile – which of course is what has happened in the Far East. Here in the UK this is further clouded by the existing cable industry, which has struggled to reach profitability based on massive investments in infrastructure during the 90s.

What are Passive Optical Networks (PONs)?

The key word is passive. In standard optical transmission equipment, used in the core of public voice and data networks, all of the data being transported is switched using electrical or optical switches. This means that investment needs to be made in the network equipment to undertake that switching, and that is expensive. In a PON, instead of electrical equipment joining or splitting optical fibres, fibres are simply fused together at minimum cost – just like T-junctions in domestic plumbing. Light travelling down the fibre then splits or joins when it hits a splitter. Equipment in the carrier’s exchange (or Central Office [CO]) and in the customer’s home then multiplexes or de-multiplexes an individual customer’s data stream.

Although the use of PONs considerably reduces equipment costs, as no switching equipment is required in the field and hence no electrical power feeds are required, it is still an extremely expensive technology to deploy, making it very difficult to create a business case that stacks up. A major problem is that there is often no free space available in existing ducts, pushing carriers towards new digs. Digging up roads and laying the fibre is a costly activity. I’m not sure what the actual costs are these days, but £50 per metre dug used to be the figure many years ago.

As seems to be the norm in most areas of technology, there are two PON standards slugging it out in the market place with a raft of evangelists attached to both camps. The first is Gigabit PON (GPON) and the second is Ethernet PON (EPON).

About Gigabit PON (GPON)

The concept of PONs goes back to the early 1990s, to the time when the carrier world was focussed on a vision of ATM being the world’s standard packet or cell based WAN and LAN transmission technology. This never really happened, as I discussed in The demise of ATM, but ATM lives on in other services defined around that time. Two examples are broadband Asymmetric DSL (ADSL) and the lesser known ATM Passive Optical Network (APON).

APON was not widely deployed and was soon superseded by the next best thing – Broadband PON (BPON), also known as ITU-T G.983 as it was developed under the auspices of the ITU. More importantly, APON was limited in the number of data channels it could handle, and BPON added Wave Division Multiplexing (WDM) (covered in TechnologyInside in Making SDH, DWDM and packet friendly). BPON uses one wavelength for 622Mbit/s downstream traffic and another for 155Mbit/s upstream traffic.

If there are 32 subscribers on the system, that bandwidth is divided among the 32 subscribers (plus protocol overhead). Upstream, a BPON system provides 3 to 5Mbit/s per subscriber when fully loaded.
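The simple arithmetic behind those per-subscriber figures can be sketched as follows; the overhead fraction is an illustrative assumption of mine, not a G.983 number.

```python
# A back-of-the-envelope sketch of the per-subscriber arithmetic above.
# The overhead fraction is an illustrative assumption, not a G.983 figure.

def per_subscriber_mbps(shared_mbps, subscribers, overhead_fraction=0.15):
    """Shared PON bandwidth divided across subscribers after protocol overhead."""
    return shared_mbps * (1 - overhead_fraction) / subscribers

print(per_subscriber_mbps(622, 32))   # downstream: roughly 16-17 Mbit/s each
print(per_subscriber_mbps(155, 32))   # upstream: roughly 4 Mbit/s, i.e. the 3-5 Mbit/s quoted
```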

GPON is the latest upgrade from this stable; it uses an SDH-like data framing standard and provides a data rate of 2.5Gbit/s downstream and 1.25Gbit/s upstream. The big technical difference is that GPON is based on carrying Ethernet and IP rather than ATM.

It is likely that GPON will find its natural home in the USA and Europe. An example is Verizon, which is deploying 622Mbit/s BPON to its subscribers but is committed to upgrading to GPON within twelve months. In the UK, BT’s Openreach has selected GPON for a trial.

About Ethernet PON (EPON)

EPON comes from the IEEE stable and is standardised as IEEE 802.3ah. EPONs are based on Ethernet standards and derive the benefits of using this commonly adopted technology. EPON uses only a single fibre between the subscriber split and the central office and does not require any power in the field, such as would be needed if kerb-side equipment were required. EPON also supports downstream Point to Multipoint (P2MP) broadcast, which is very important for broadcasting video. As with carrier-grade Ethernet standards such as PBB, some core Ethernet features such as CSMA/CD have been dropped in this new use of Ethernet. Only one subscriber is able to transmit at any time, using a Time Division Multiple Access (TDMA) protocol.
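As a rough illustration of the TDMA idea – the real 802.3ah mechanism, MPCP with its GATE and REPORT messages, is considerably more involved – a toy upstream scheduler might grant each ONU a non-overlapping window in turn:

```python
# A toy sketch of TDMA-style upstream scheduling on a shared PON fibre
# (illustrative only - not the actual 802.3ah MPCP machinery).
# Queue sizes and the grant cap are invented figures.

def schedule_upstream(queued_bytes, max_grant_bytes=15000):
    """Grant each ONU a non-overlapping transmission window in turn."""
    grants, start = [], 0
    for onu, backlog in queued_bytes.items():
        size = min(backlog, max_grant_bytes)   # cap each ONU's turn
        if size:
            grants.append((onu, start, start + size))  # (who, window start, window end)
            start += size
    return grants

queues = {"onu1": 4000, "onu2": 30000, "onu3": 0, "onu4": 9000}
for onu, begin, end in schedule_upstream(queues):
    print(f"{onu}: bytes {begin}-{end}")
```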

A typical deployment is shown in the picture below: one fibre to the exchange connecting 32 subscribers.

EPON architecture (Source: IEEE)

A Metro Ethernet Forum overview of EPON can be found here.

The Far East, especially Japan, has taken EPON to its heart, with the vast majority of connections being installed by NTT, the major Japanese incumbent carrier, followed by Korea Telecom with hundreds of thousands of EPON connections.

Roundup

There is still a lot of development taking place in the world of PONs. On one hand, 10Gbit/s EPON is being talked about to give it an edge over 2.5Gbit/s GPON. On the other, WDM PONs, which would enable far higher bandwidths to be delivered to each home, are being trialled in the Far East. WDM-PON systems allocate a separate wavelength to each subscriber, enabling the delivery of 100Mbit/s or more.

Only this month it was announced that a Japanese MSO Moves 160 Mbit/s using advanced cable technology (the subject of a future TechnologyInside post).

DSL-based broadband suffers from a pretty major problem: the farther a subscriber is from their local exchange, the lower the data rate that can be supported reliably. PONs do not have this limitation (well, technically they do, but the distance is much greater). So in the race to increase data rates in the home, PONs are a clear-cut winner, along with cable technologies such as DOCSIS 3.0 used by cable operators.

Personally, I would not expect PON deployment to increase over and above its snail-like pace in Europe at any time in the near future. Expect to see the usual trials announced by the largest incumbent carriers such as BT, FT and DT, but don’t hold your breath waiting for it to arrive at your door. This has been questioned recently in a government report warning that the lack of high-speed Internet access could jeopardise the UK’s growth in future years.

You may think “so what – I’m happy with 2 – 7Mbit/s ADSL!”, but I can say with confidence that you should not be happy. The promise of IPTV services is really starting to be delivered at long last, and encoding bandwidths of 1 to 2Mbit/s really do not cut the mustard in the quality race. This is the case for standard, let alone high definition, TV. Moreover, with each family member having a computer and television in their own room, and each wanting to watch or listen to their own programmes simultaneously, low speed ADSL connections are far from adequate.

One way out of this is to bond multiple DSL lines together to gain that extra bandwidth. I wrote a post a few weeks ago – Sharedband: not enough bandwidth? – about a company that provides software to do just this. The problem is that you would require an awful lot of telephone lines to get the 100Mbit/s that I really want! Maybe I should emigrate?

Addendum #1: Economist Intelligence Unit: 2007 e-readiness rankings


GMPLS and common control

April 16, 2007

From small beginnings MultiProtocol Label Switching (MPLS) has come a long way in ten years. Although there are a considerable number of detractors who believe it costly and challenging to manage, it has now been deployed by just about every carrier around the world in one guise or another (MPLS-TE), as discussed in The rise and maturity of MPLS. Moreover, it is now extending its reach down the stack into the optical transmission world through activities such as T-MPLS, covered in PBB-TE / PBT or will it be T-MPLS? (Picture: GMPLS: Architecture and Applications by Adrian Farrel and Igor Bryskin.) In the same way that early SDH standards did not encompass appropriate support for packet based services, as discussed in Making SDH, DWDM and packet friendly, initial MPLS standards were firmly focussed on IP networks, not on use with optical wavelength or TDM switching.

The promise of MPLS was to bring the benefits of a connection-oriented regime to the inherently connectionless world of IP networks and to be able to send traffic along pre-determined paths, thus improving performance. This was key for the transmission of real-time or isochronous services such as VoIP over IP networks. Labels attached to packets enabled the creation of Label Switched Paths (LSPs) which packets would follow through the network. Just as importantly, it was possible to specify the quality of service (QoS) of an LSP, thus enabling the prioritisation of traffic based on importance.

It was inevitable that MPLS would be extended to enable it to be applied to the optical world, and this is where the IETF’s Generalised MPLS (GMPLS) standards come in. Several early packet and data transmission standards bundled together signalling and data planes in vertical ‘stove-pipes’, creating services that needed to be managed from top to bottom completely separately from each other.

The main vision of GMPLS was to create a common control plane that could be used across multiple services and layers thus considerably simplifying network management by automating end-to-end provisioning of connections and centrally managing network resources. In essence GMPLS extends MPLS to cover packet, time, wavelength and fibre domains. A GMPLS control plane also lies at the heart of T-MPLS replacing older proprietary optical Operational Support Systems (OSS) supplied by optical equipment manufacturers. GMPLS provides all the capabilities of those older systems and more.

GMPLS is also often referred to as Automatic Switched Transport Network (ASTN) although GMPLS is really the control plane of an ASTN.

GMPLS extends MPLS functionality by creating and provisioning:

  • Time Division Multiplex (TDM) paths, where time slots are the labels (SONET / SDH).
  • Frequency Division Multiplex (FDM) paths, where optical frequency such as seen in WDM systems is the label.
  • Space Division Multiplexed (SDM) paths, where the label indicates the physical position of the data (photonic cross-connects).

Switching domain | Traffic type | Forwarding scheme | Example of device
Packet, cell | IP, ATM | Label | IP router, ATM switch
Time | TDM (SONET / SDH) | Time slot | Digital cross-connect
Wavelength | Transparent | Lambda | DWDM system
Physical space | Transparent | Fibre, line | OXC
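Reading the table another way: a GMPLS ‘label’ is a generalised object – a shim label for packets, a time slot for TDM, a wavelength for WDM and a port or fibre for a photonic cross-connect. A minimal sketch of that idea, purely my own illustration rather than anything from the RFCs, might look like this:

```python
# An illustrative sketch (my own, not from the GMPLS RFCs) of a generalised
# label as a tagged value: the same control plane object can name a packet
# label, a TDM time slot, a wavelength or a physical port.

from dataclasses import dataclass
from enum import Enum

class SwitchingDomain(Enum):
    PACKET = "packet"        # MPLS shim label
    TDM = "tdm"              # SONET/SDH time slot
    LAMBDA = "lambda"        # DWDM wavelength
    FIBRE = "fibre"          # port on a photonic cross-connect

@dataclass
class GeneralisedLabel:
    domain: SwitchingDomain
    value: object            # e.g. 16001, "VC-4 #3", "1550.12 nm", "port 7"

# The same signalling message shape can then carry any of these:
examples = [
    GeneralisedLabel(SwitchingDomain.PACKET, 16001),
    GeneralisedLabel(SwitchingDomain.TDM, "VC-4 #3"),
    GeneralisedLabel(SwitchingDomain.LAMBDA, "1550.12 nm"),
    GeneralisedLabel(SwitchingDomain.FIBRE, "port 7"),
]
for label in examples:
    print(label.domain.value, "->", label.value)
```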

GMPLS applicability

GMPLS has extended and enhanced the following aspects of MPLS:

  • Signalling protocols – RSVP-TE and CR-LDP
  • Routing protocols – OSPF–TE and IS-IS-TE

GMPLS has also added:

  • Extensions to accommodate the needs of SONET / SDH and optical networks.
  • A new protocol, link-management protocol (LMP), to manage and maintain the health of the control and data planes between two neighbouring nodes. LMP is an IP-based protocol that includes extensions to RSVP–TE and CR–LDP.

As GMPLS is used to control highly dissimilar networks operating at different levels in the stack, there are a number of issues it needs to handle in a transparent manner:

  • It does not just forward packets in routers, but needs to switch in time, wavelength or physical ports (space) as well.
  • It should work with all applicable switched networks – OTN, SONET / SDH, ATM, IP, etc.
  • There are still many switches that are not able to inspect traffic and thus not able to extract labels – this is especially true for TDM and optical networks.
  • It should facilitate dissimilar network interoperation and integration.
  • Packet networks work at a finer granularity than optical networks – it would not make sense to allocate a 622Mbit/s SDH link to a 1Mbit/s video IP stream by mistake.
  • There is a significant difference in scale between IP and optical networks from a control perspective – optical networks being much larger with thousands of wavelengths to manage.
  • There is often a much bigger latency in setting up an LSP on an optical switch than there is on an IP router.
  • SDH and SONET systems can undertake a fast switch restoration in less than 50ms in case of failure – a GMPLS control plane needs to handle this effectively.

Round-up

GMPLS / ASTN is now well entrenched in the optical telecommunications industry with many, if not most, of the principal optical equipment manufacturers demonstrating compatible systems.

It’s easy to see the motivation to create a common control plane (GMPLS was defined under the auspices of the IETF’s Common Control and Measurement Plane (CCAMP) working group) as it would considerably reduce the complexity and cost of managing fully converged Next Generation Networks (NGNs). Indeed, it is hard to see how any carrier could implement a real converged network without it.

As discussed in Path Computation Element (PCE): IETF’s hidden jewel converged NGNs will need to compute service paths across multiple networks, across multiple domains and automatically pass service provision at the IP layer down to optical networks such as SDH and ASTN. Again, it is hard to see how this vision can be implemented without a common control plane and GMPLS.

To quote the concluding comment in GMPLS: The Promise of the Next-Generation Optical Control Plane (IEEE Communications Magazine, July 2005, Vol. 43 No. 7):

“we note that far from being abandoned in a theoretical back alley, GMPLS is very much alive and well. Furthermore, GMPLS is experiencing massive interest from vendors and service providers where it is seen as the tool that will bring together disparate functions and networks to facilitate the construction of a unified high-function multilayer network operators will use as the foundation of their next-generation networks. Thus, while the emphasis has shifted away from the control of transparent optical networks over the last few years, the very generality of GMPLS and its applicability across a wide range of switching technologies has meant that GMPLS remains at the forefront of innovation within the Internet. “


Chaos in Bangladesh’s ‘Illegal’ VoIP businesses

April 11, 2007


Take a listen to a report on BBC Radio Four’s PM programme, broadcast on the 9th of April, which talks about the current chaos in Bangladesh brought about by the enforced closure of ‘illegal’ VoIP businesses. This is one of the impacts of the state of emergency imposed three months ago, and it has resulted in a complete breakdown of the Bangladeshi phone network.

It seems that VoIP calls account for up to 80% of telephone traffic into the country from abroad, driven by low call rates of between 1 and 2 pence per minute.

The new military-backed government has been waging war on small VoIP businesses, with the “illegality and corruptions of the past being too long tolerated”. Many officials have been arrested, buildings pulled down and businesses closed.

The practical result has been to throw the telephone industry into chaos, as hundreds of thousands of Bangladeshis living abroad try to call home only to get the engaged tone.

“In many countries VoIP is legal but in Bangladesh it has been long rumoured that high profile politicians have been operating the VoIP businesses and had an interest in keeping them outside of the law and unregulated to avoid taxes on the enormous revenues they generated.”

The report says that the number of conventional phone lines is being doubled in April, but only to 30,000 lines – with a population of over 140 million people this is far too few!

You can listen to the report here Chaos in Bangladesh’s Illegal VoIP business Copyright BBC

It really is amazing how disruptive a real disruptive technology can be, but when this happens it usually comes back to bite us!

I talked about the SIM box issue in Revector, detecting the dark side of VoIP, and the Bangladesh situation provides the reasoning behind why incumbent carriers are often hell-bent on stamping out VoIP traffic. In the western world the situation is no different, but governments and carriers do not just bulldoze the businesses – maybe they should in some cases!

Addendum #1: the-crime-of-voice-over-ip-telephony/


Path Computation Element (PCE): IETF’s hidden jewel

April 10, 2007

In a previous post, MPLS-TE and network traffic engineering, I talked about the challenges of communication network traffic engineering and capacity planning and their relation to MPLS-TE (or MPLS TE). Interestingly, I realised that I did not mention that all of the engineering planning, design and optimisation activities that form the core of network management usually take place off-line. What I mean by this is that a team of engineers sits down, either on an ad hoc basis driven by new network or customer acquisitions or as part of an annual planning cycle, to produce an upgrade or migration plan that can be used to extend their existing network to meet the needs of the additional traffic. This work does not impact live networks until the OPEX and CAPEX plans have been agreed and signed off by management teams and then implemented. A significant proportion of the data that drives this activity is obtained from product marketing and/or sales teams, who are supposed to know how much additional business, and hence additional traffic, will be imposed on the network in the time period covered by planning activities.

This long-term method of planning network growth has been used since the dawn of time and the process should put in place the checks and balances (that were thrown to the wind in the late 1990s) to ensure that neither too much nor too little investment is made in network expansion.

What is a Path Computation Element (PCE)?

What is a path through the network? I’ve covered this extensively in my previous posts about MPLS’s ability to guide traffic through a complex network and force particular packet streams to follow a constraint-based and pre-determined path from network ingress to network egress. This deterministic path or tunnel enables the improved QoS management of real-time services such as Voice over IP or IPTV.

Generally, paths are calculated and managed off-line as part of the overall traffic engineering activity. When a new customer is signed up, their traffic requirements are determined and the most appropriate paths for that traffic are superimposed on the current network topology to best meet the customer’s needs and balance traffic distribution across the network. If new physical assets are required, these are provisioned and deployed as necessary.

Planning cycles are traditionally focussed on medium to long term needs and cannot really be applied to shorter-term planning needs. Such short-term needs could derive from a number of requirements, such as:

  • Changing network configurations dependent on the time of day; for example, there is usually a considerable difference in traffic profiles between office hours, evening hours and night time. The possibility of dynamically moving traffic dependent on busy hours (time being the new constraint) could provide significant cost benefits.
  • Dynamic or temporary path creation based on customers’ transitory needs.
  • Improved busy hour management through auto-rerouting of traffic.
  • Dynamic balancing of network load to reduce congestion.
  • Improved restoration when faults occur.

To be able to undertake these tasks a carrier would need to move away from off-line path computation to on-line path computation, and this is where the IETF’s Path Computation Element (PCE) Working Group comes to the rescue.

In essence, on-line PCE software acts very much along the same lines as a graphics chip handling off-loaded calculations for the main CPU in a personal computer. For example, a service requires that a new path be generated through the network, and that request, together with the constrained-path requirements for the path such as bandwidth, delay etc., is passed to the attached PCE computer. The PCE has a complete picture of the flows and paths in the network at that precise moment, derived from other Operational Support Software (OSS) programmes, so it can calculate in real time the optimal path through the network that will deliver the requested service. This path is then used to automatically update router configurations and the traffic engineering database.
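A conceptual sketch of that request / response flow is given below. It is my own illustration, not the working group’s protocol: the PCE holds the traffic engineering database (TED), answers a path request against it and books the reservation so the next request sees it. The topology, candidate paths and field names are all invented.

```python
# A conceptual sketch of a PCE answering offloaded path requests against a
# traffic engineering database (TED). Illustrative only - not the actual
# PCE protocol; candidate paths and capacities are invented.

class PathComputationElement:
    def __init__(self, ted, candidate_paths):
        self.ted = ted                        # link -> (capacity_mbps, reserved_mbps)
        self.candidates = candidate_paths     # (src, dst) -> list of paths (link tuples)

    def free_bw(self, link):
        capacity, reserved = self.ted[link]
        return capacity - reserved

    def handle_request(self, request):
        """Pick the first candidate path with enough free bandwidth and book it."""
        bw = request["bandwidth_mbps"]
        for path in self.candidates[(request["src"], request["dst"])]:
            if all(self.free_bw(link) >= bw for link in path):
                for link in path:                          # update the TED so the next
                    cap, res = self.ted[link]              # request sees this reservation
                    self.ted[link] = (cap, res + bw)
                return {"status": "ok", "path": path}
        return {"status": "no-path"}

ted = {"A-B": (1000, 800), "B-C": (1000, 100), "A-D": (1000, 0), "D-C": (1000, 0)}
pce = PathComputationElement(ted, {("A", "C"): [("A-B", "B-C"), ("A-D", "D-C")]})

# A head-end router (the Path Computation Client) offloads the calculation:
print(pce.handle_request({"src": "A", "dst": "C", "bandwidth_mbps": 300}))
# -> the A-B link has only 200 Mbit/s free, so the PCE returns the A-D, D-C path
```

The point is the offload: the head-end router only formulates the request and signals the returned path, while the heavy computation and the network-wide view live in the PCE.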

In practice, the PCE architecture calls for each Autonomous System (AS) domain to have its own PCE, and if a multi-domain path is required the affected PCEs co-operate to calculate the required path, with the requirement provided by a ‘master’ PCE. The standard supports any combination, number or location of PCEs.

Why a separate PCE?

There are a number of reasons why a separate PCE is being proposed:

  • Path computation of any form is not an easy and simple task by any means. Even with appropriate software, computing all the primary, back-up and service paths on a complex network will strain computing techniques to the extreme. A number of companies that provide software capable of undertaking this task were listed in the post mentioned above.
  • The PCE will need to undertake computationally intensive calculations, so it is unlikely (to me) that a PCE capability would ever be embedded into a router or switch, as they generally do not have the power to undertake path calculations in a complex network.
  • If path calculations are to be undertaken in a real-time environment then, unlike off-line software which can take hours for an answer to pop out, a PCE would need to provide an acceptable solution in just a few minutes or seconds.
  • Most MPLS routers calculate a path on the basis of a single constraint e.g. the shortest path. Calculating paths based on multiple constraints such as bandwidth, latency, cost or QoS significantly increases the computing power required to reach a solution.
  • Routers route and have limited or partial visibility of the complete network, domain and service mix and thus are not able to undertake the holistic calculations required in a modern converged network.
  • In a large network the Traffic engineering database (TED) can become very large creating a large computational overhead for a core router. Moving TED calculations to a dedicated PCE server could be beneficial in lowering path request response times.
  • In a traditional IP network there may be many legacy devices that do not have an appropriate control plane thus creating visibility ‘holes’.
  • A PCE could be used to provide alternative restorative routing of traffic in an emergency. As a PCE would have a holistic view of the network, restoration using a PCE could reduce potential knock-on effects of a reroute.

The key aspect of multi-layer support

One of the most interesting architectural aspects of the PCE is that it addresses a very significant issue faced by all carriers today – multi-layer support. All carriers utilise multiple layers to transport traffic – these could include IP-VPN, IP, Ethernet, TDM, MPLS, SDH and optical networks in several possible combinations. The issue is that a path computation at the highest level inevitably has a knock-on effect down the hierarchy to the physical optical layer. Today, each of these layers and protocols is generally managed, planned and optimised as a separate entity, so it would make sense that when a new path is calculated, its requirements are passed down the hierarchy so that knock-on effects can be better managed. The addition of a new small IP link could force the need to add an additional fibre.
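The toy sketch below (my own illustration, with invented figures and layer names) shows the flow-down idea: an IP-layer demand is checked against MPLS-layer headroom, and any shortfall is passed down as a request for more underlying optical capacity.

```python
# A toy sketch of the layered flow-down described above (my own illustration).
# An IP-layer demand is checked against MPLS-layer capacity; any shortfall is
# passed down as a request for more underlying optical capacity. All figures
# and layer names are invented.

layers = [
    {"name": "MPLS LSP mesh", "capacity_mbps": 2000, "used_mbps": 1800},
    {"name": "SDH / optical", "capacity_mbps": 10000, "used_mbps": 9500},
]

def place_demand(demand_mbps):
    """Walk down the layer stack, recording where extra capacity must be added."""
    actions = []
    for layer in layers:
        headroom = layer["capacity_mbps"] - layer["used_mbps"]
        if headroom >= demand_mbps:
            layer["used_mbps"] += demand_mbps
            actions.append(f"{layer['name']}: absorbed {demand_mbps} Mbit/s")
            break
        shortfall = demand_mbps - headroom
        layer["used_mbps"] = layer["capacity_mbps"]
        actions.append(f"{layer['name']}: short by {shortfall} Mbit/s, pass demand down")
        demand_mbps = shortfall
    else:
        actions.append("bottom layer exhausted: new fibre or wavelengths needed")
    return actions

for line in place_demand(600):
    print(line)
```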

Clearly, providing flow-through and visibility of new services to all layers and managing path computation on a multi-layer basis would be a real boon for network optimisation and cost reduction. However, let’s bear in mind that this represents a nirvana solution for planning engineers!

A Multi-layer path

The PCE specification is being defined to provide this cross-layer or multi-layer capability. Note that a PCE is not a solution aimed at use across the whole Internet – clearly that would be a step too far, along the lines of upgrading the whole Internet to IPv6!

I will not plunge into the deep depths of the PCE architecture here, but a complete overview can be found in A Path Computation Element (PCE)-Based Architecture (RFC 4655). At the highest level, the PCE talks to a signalling engine that takes in requests for a new path calculation and passes any consequential requests on to other PCEs that might be needed for an inter-domain path. The PCE also interacts with the traffic engineering database to automatically update it if and as required (Picture source: this paper).

Another interesting requirements document is Path Computation Element Communication Protocol (PCECP) Requirements.

Round up

It is very early days for the PCE project, but it would seem to provide one of the key elements required to enable carriers to effectively manage a fully converged Next Generation Network. However, I would imagine that the operational management in many carriers would be aghast at putting the control of even transient path computation on-line, considering the risk and the consequences for customer experience if it went wrong.

Clearly, a PCE architecture has to be based on powerful computing engines and software that can holistically monitor the network and calculate new paths in seconds and, most importantly, it must be a truly resilient network element. Phew!

Note: One of the few commercial companies working on PCE software is Aria Networks, who are based in the UK and whose CTO, Adrian Farrel, is also Chairman of the PCE Working Group. I do declare an interest as I undertook some work for Aria Networks in 2006.

Addendum #1: GMPLS and common control

Addendum #2: Aria Networks shows the optimal path

Addendum #3: It was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF), which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here, starting at slide 10. Does RACF compete with PCE, or could PCE be a part of RACF?

Addendum #4: New web site focusing on PCE: http://pathcomputationelement.com


iotum’s Talk-Now is now available!

April 4, 2007

In a previous post The magic of ‘presence’, I talked about the concept of presence in relation to telecommunications services and looked at different examples of how it had been implemented in various products.

One of the most interesting companies mentioned was iotum, a Canadian company. iotum had developed what they called a relevance engine, which enabled ability-to-talk and willingness-to-talk information to be provided in a telecom service by attaching it to appropriate equipment such as Private Branch eXchanges (PBXs) or call centre Automatic Call Distribution (ACD) managers.

One of the biggest challenges for any company wanting to translate presence concepts into practical services is how to make it useable, rather than just being a fancy concept used to describe a number of peripheral and often unusable features of a service. Alec Saunders, iotum’s founder, has been articulating his ideas about this in his blog post Voice 2.0: A Manifesto for the Future. Like all companies that have their genesis in the IT and applications world, Alec believes that “Voice 2.0 is a user-centric view of the world… it’s all about me — my applications, my identity, my availability.”

And, rather controversially if you come from the network or the mobile industry: “Voice 2.0 is all about developers too — the companies that exploit the platform assets of identity, presence, and call control. It’s not about the network anymore.” Oh, by the way, just to declare my partisanship, I certainly go along with this view and often find that the stove-pipe and closed attitudes sometimes seen in mobile operators are one of the biggest hindrances to the growth of data-related applications on mobile phones.

There is always a significant technical and commercial challenge in OEMing platform-based services to service providers and large mobile operators, so the launch of a stand-alone service that is under the complete control of iotum is not a bad way to go. Any business should have full control of its own destiny, and the choice of the relatively open Blackberry platform gives iotum a user base they can clearly focus on to develop their ideas.

iotum launched the beta version of Talk-Now in January; it provides a set of features aimed at helping Blackberry users make better use of the device the world has become addicted to over the last few years. Let’s talk turkey: what does the Talk-Now service do?

According to the web site, as seen in the picture on the left, it provides a simple-in-concept bolt-on service for Blackberry phone users to see and share their availability status with other users.

At the in-use end, the Talk-Now service interacts with a Blackberry user’s address book by adding colour coding to contact names to show each individual’s availability. On initial release only three colours were used: white, red and green.

Red and green clearly show when a contact is either Not Available or Available; I’ll talk about white in a minute. Yellow was added later, based on user feedback, to indicate an Interruptible status.

The idea behind Talk-Now is that it helps users reduce the amount of time they waste on non-productive calls and leaving voicemails. You may wonder how this availability guidance is provided by users. A contact with a white background provides the first clue as to how this is achieved.

Contacts with a white background are not Talk-Now users, so their availability information is not available (!). One of the key features of the service is therefore an Invite People process to get them to use Talk-Now and see your availability information.

If you wish a non-Talk-Now contact to see your availability, you can select their name from the contact list and send them an “I want to talk with you” email. The email provides a link to an Availability Page, as shown below, talks about the benefits of using the service (I assume) and asks the recipient to sign up. This is a secure page that is only available to that contact, and only for a short time.

Once a contact accepts the invite and signs up to the service, you will be able to see their availability – assuming that they set up the service.

So, how do you indicate your availability? This is set up with a small menu, as shown on the left, which lets you set the following status information:

Busy: set your free/busy status manually from your BlackBerry device

In a meeting: iotum Talk-Now synchronizes with your BlackBerry calendar to know if you are in a meeting.

At night: define which hours you consider to be night time.

Blocked group: you can add contacts to the “blocked” group.

You can also set up VIPs (Very Important Persons): individuals who receive priority treatment. This category needs to be used with care, as granting VIP status to a group overrides the unavailability settings you have made. You can also define Workdays. Some groups might be VIPs during work hours, while other groups might get VIP status outside of work. This is designed to help you better manage your personal and business communications.

There is also a feature whereby you can be alerted when a contact becomes available by a message being posted on your Blackberry as shown on the right.

 

Many of the above setting can be set up via a web page, for example:

Setting your working week

Setting contact groups

However, it should be remembered that, like Plaxo and LinkedIn, this web-based functionality does require you to upload – ‘synchronise’ – your Blackberry contact list to the iotum server, and many Blackberry users might object to this. It should be noted that the calendar is accessed as well, to determine when you are in meetings and deemed busy.

If you want to hear more, then take a look at the video that was posted after a visit with Alec Saunders and the team by Blackberry Cool last month:

Talk-Now looks to be an interesting and well thought out service. Following traditional Web 2.0 principles, the service is provided for free today with the hope that iotum will be able to charge for additional features at a future date.

I wish them luck in their endeavours and will be watching intensely to see how they progress in coming months.


Subsea network provider Azea secures $20M in more funding

April 2, 2007

Following the commissioning of a transoceanic submarine cable upgrade, Azea Networks has secured a Series D funding round of more than $20 million led by TVM Capital and supported by its existing investors.

Mike Hynes, Azea’s chief operating officer, comments, “The successful completion of our latest project once again validates the compelling business case for upgrades whereby carriers can optimize their existing assets and offer new services.”

Azea is already backed by top-tier venture capital investors from both the US and the UK, including Accel Partners, Atlas Venture, and Quester. The company reveals that new investor TVM Capital of Munich, Germany, and Boston, MA, has led this additional funding round to provide working capital for further technical and business development and to take the company to profitability.

“We have been impressed by Azea’s experienced team and their innovative approach to solving the global telecommunications operators’ dilemma of achieving profitable growth,” says Chris Cobbold of TVM Capital, who has also joined Azea’s board of directors.

Scott White, chief executive officer of Azea, observes, “This funding round reflects the continued confidence of our existing investors, and validation from new investor TVM Capital of the submarine optical upgrade market opportunity and Azea’s potential for growth.”

I will be writing an overview of Azea in a forthcoming post.


MPLS-TE and network traffic engineering

April 2, 2007

In my post entitled The rise and maturity of MPLS I said that one of the principal reasons for a carrier to implement MPLS was the need for what is known as traffic engineering in their core IP networks. Before the advent of MPLS this capability was supplied by ATM, with many of the world’s terrestrial Internet backbones being based on that technology. ATM provided a switching capability in Points of Presence (PoPs) that enabled the automatic switchover to an alternative inter-city ATM pipe in case of failure. I say ‘inter-city’ because ATM was not generally implemented on a transoceanic basis, being deemed expensive and inefficient due to its 17% overhead, commonly known as cell tax (Picture: Aria Networks planning software. Presentation to the UK Network Operators Forum 2006). IP engineers were keen to remove this additional ATM layer and replace it with a control capability, which became MPLS. However, MPLS in its original guise did not really ‘cut the mustard’ for use in a traffic engineered regime, so the standard was enhanced through the release of extensions known as MPLS-TE. This post will look at traffic engineering and its sibling activity, capacity planning, and their relationship to MPLS.

Capacity planning

Capacity planning is an activity that is undertaken by all carriers on a ‘regular’ basis. Nowadays this would most likely be undertaken annually, although in the heady, high-growth days of the latter half of the 90s it was unlikely to be undertaken less than quarterly.

Capacity planning is an important function as it directly drives the purchase of additional network equipment or the purchase or building of additional bandwidth capacity for the physical network. Capacity planning is undertaken by the planning department and principally consists of the following activities.

Network topology: The starting point of any planning exercise is to profile the existing network to act as a benchmark to build upon. This consists of two things. The first is a complete database of the nodes or PoPs and of the network link bandwidths, i.e. their maximum capacities. This sounds easier than it is in reality. In many instances carriers do not know the full extent of the equipment deployed, often the result of one too many acquisitions. This discovery of assets can either be an on-going manual spreadsheet or database exercise, or software can be used to automatically discover the up-to-date installed network topology. Another way is to export network configurations from network equipment such as routers.

Traffic matrices: What is needed next is detailed link resource utilisation data, or traffic profiles, for each service type. These are often called traffic matrices. Links are the pipes interconnecting a network’s PoPs, and their utilisation is how much of each link’s bandwidth is being used. As IP traffic is very dynamic and varies tremendously according to the time of day, good traffic engineering leads to good operational practice, such as never loading a link beyond a certain percentage – say 50%. Every carrier has its own standard, which could quite easily be made higher to save money but could risk poor network performance at peak times. Clearly, engineers and accountants have different perspectives in these discussions! (Raw IP traffic flow: Credit Cariden.)
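A small sketch of the kind of utilisation check implied by that rule of thumb is shown below; the link names, loads and the 50% threshold are example figures only.

```python
# A small sketch (illustrative only) of flagging links whose busy-hour load
# exceeds the carrier's own planning threshold. All figures are invented.

PLANNING_THRESHOLD = 0.5   # never plan to load a link beyond 50%

links = {
    # link: (capacity in Mbit/s, busy-hour load in Mbit/s)
    "London-Manchester": (10000, 6200),
    "London-Bristol":    (2500, 900),
    "Manchester-Leeds":  (2500, 1300),
}

for name, (capacity, load) in links.items():
    utilisation = load / capacity
    flag = "UPGRADE CANDIDATE" if utilisation > PLANNING_THRESHOLD else "ok"
    print(f"{name}: {utilisation:.0%} {flag}")
```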

Demand forecast: At this point, capacity planning engineers go to their product marketing and sales brethren with a request for a service sales forecast for the next planning cycle, which could be between one and three years. If you talk to any planning engineer I’m sure you will hear plaintive cries such as “I can’t plan unless I get a forecast”; however, can you think of a worse group of individuals to get this sort of information from than sales people? I would guess that this is one of the biggest challenges planning departments face.

Once the topology, current traffic matrices and forecasts for each service (IP transit, VoIP, IP VPNs, IPTV etc.) have been obtained, the task of planning for the next capacity planning period can begin. This results in – or should result in – a clear plan for the company that covers such issues as:

  • What existing link upgrades are required
  • What new links are required
  • What new or expansion to backup links are required
  • What new Capital Expenditure (CAPEX) is required
  • What increase in Operational Expenditure (OPEX) is required
  • What new migration or changeover plans are required
  • Lots of management reports, spreadsheets and graphs
  • Caveats about the on-going unreliability of the growth forecasts received!

Traffic engineering (TE)

While capacity planning is a long-term, forward-looking activity concerned with optimising network growth and performance in the face of growing service demand, traffic engineering is focused on how the network performs in delivering services at a much finer granularity.

Traffic engineering in networks has a history as long as telephones have been around and is closely associated with A.K. Erlang. One of the fundamental metrics in the voice Public Switched Telephone Network (PSTN) was named after him – the Erlang. An Erlang is a measure of the occupancy or utilisation of voice links, regardless of whether traffic is flowing or not. Erlang-based calculations were, and are, used to calculate Quality of Service (QoS) and the optimum utilisation of fixed-bandwidth links, taking into account the amount of traffic at peak times.
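For reference, the classic Erlang B formula gives the probability that a call is blocked when a given load is offered to a fixed number of circuits. The recursive form below is the standard one; the example figures are mine.

```python
# The classic Erlang B formula: the probability that a call is blocked when
# offered_erlangs of traffic are offered to n_circuits. Standard recursive form.

def erlang_b(offered_erlangs, n_circuits):
    """Blocking probability for a trunk group of n_circuits offered a given load."""
    b = 1.0
    for k in range(1, n_circuits + 1):
        b = (offered_erlangs * b) / (k + offered_erlangs * b)
    return b

# e.g. 20 Erlangs of busy-hour traffic offered to 30 circuits
print(f"{erlang_b(20, 30):.4f}")   # a little under 1% of calls blocked
```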

Traffic engineering is even more important in the highly dynamic world of IP networks, and carriers can realise a considerable number of benefits if traffic engineering is taken seriously by management:

  • Cost optimisation: Providing network links is an expensive pastime by the time you take IP equipment, optical equipment, OSS costs and OPEX into account. The more that a network is fully utilised without degradation, the more money can flow to the bottom line.
  • Congestion management: If a network is badly traffic engineered either through under-planning, under-spending or under-resourcing, the more chance there is for network problems such as outages or congestion to impact a customer’s experience. The telecoms world is stuffed full of examples of where this has happened.
  • Dynamic services and traffic profiles: Traffic profiles and flows can change quite considerably over a period of time when new services with different traffic profiles are launched without involving network planners. In an age when there is considerable management pressure to reduce time-to-market, this can happen more often than many companies would admit to.
  • Efficient routing: In MPLS and the limitations of the Internet I wrote about how one of the strengths of the IP protocol is that a packet can always find a path to its destination if one exists, but that strength creates problems when a service requires predictable performance. Traffic engineered networks provide paths for critical services that are deterministic / predictable from a path perspective and from a Quality of Service (QoS) perspective. It would not be an overstatement to say that this is pretty much mandatory in these days of converged Next Generation Networks.
  • Availability, resilience and fast restoration: If a network’s customers see an outage at any time, the consequences can be catastrophic from a churn or brand image perspective, so high availability is a crucial network metric that needs to be monitored. There is a tremendous difference in perceived reliability between PSTN voice networks and IP networks. For example, tell me the last time your home telephone broke down? It’s not that PSTN networks are more reliable than IP networks, they’re not; it’s just that PSTN networks have been better designed to transparently work around broken equipment or broken links. Subscribers, to use that old telephony term, are blissfully unaware of a network outage. Of course, if a digger cuts through a major fibre and the SDH backbone ring that is not actually a ring… Well, that’s another story.
  • QoS and new services: Real time services need an ability to separate latency-critical services such as VoIP from non-critical services such as email. Traffic engineering is a critical tool in achieving this.

Multi-protocol Label Switching – Traffic Engineering (MPLS-TE)

The term ‘-TE’ is used to describe other services as well, notably in the attempts to make Ethernet carrier grade in quality, which are discussed in my posts on PBB-TE and T-MPLS, the latter being built on the back of MPLS-TE (Picture credit: OpNet planning software).

As mentioned above, before the advent of MPLS-TE, carriers of IP traffic relied on the underlying transport – networks such as ATM – for traffic engineering. MPLS-TE consists of a set of extensions to MPLS that enable native traffic engineering within an MPLS environment. Of course, this does not remove the need to traffic engineer any layer-1 transport network that MPLS may be carried over. The requirements for MPLS-TE were set out in the IETF’s RFC 2702.

What does MPLS-TE bring to the TE party?

(1) Explicit or constraint-based end-to-end routing: The picture below shows a small network where traffic flowing from the left could travel via two alternative paths to exit on the right. This is precisely the environment that Interior Gateway Protocol (IGP) routing algorithms such as Open Shortest Path First (OSPF) and Intermediate System – Intermediate System (IS-IS) were designed to operate in, routing all traffic over the shortest path as shown below (Picture credit: NANOG).

This could inevitably lead to problems, with the north path shown above becoming congested while the south path remains unused, wasting expensive network assets. Before MPLS-TE, standard IGP routing metrics could be ‘adjusted’, ‘manipulated’ or ‘tweaked’ to reduce this possibility; however, doing this could be very complicated and very challenging to manage on a day-to-day basis. Such an approach usually required a network-wide plan. In other words, it is a bit of a horror to manage.

With MPLS-TE, using an extension to the Resource Reservation Protocol (RSVP) signalling protocol known, not surprisingly, as RSVP-TE, explicit paths can be set up to force selected traffic flows through them, as shown below.

This deterministic routing helps reduce congestion on particular links, helps load the network more evenly thus reducing the number of ‘orphaned links’, ensures optimal utilisation of the network, helps planners separate latency-dependent services from non-critical services, and helps better manage upgrade costs (Picture credit: NANOG).

These paths are called TE tunnels or label switched paths (LSPs). LSPs are unidirectional so two need to be specified to handle bi-directional traffic between two nodes.

(2) Constraint-based routing: Network planners are now able to undertake what is known as constraint-based routing, where traffic paths can be computed (aka path computation) that meet constraints other than the smallest number of nodes or PoPs, which is what drives OSPF and IS-IS. These could be the links with the least utilisation, the least delay or the most free bandwidth, or links that use a carrier’s own, rather than a partner’s, infrastructure.
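A minimal sketch of such a constrained shortest-path computation (often referred to as CSPF) is shown below: prune the links that fail the constraints, then run an ordinary shortest-path search over what is left. The topology, metrics and constraint values are invented examples.

```python
# A minimal CSPF-style sketch: prune links that fail the constraints, then
# run a shortest-path computation over what remains. Figures are invented.

import heapq

# adjacency: node -> {neighbour: {"metric": igp_cost, "free_bw": Mbit/s, "own_infra": bool}}
topology = {
    "PoP-A": {"PoP-B": {"metric": 10, "free_bw": 100, "own_infra": True},
              "PoP-C": {"metric": 30, "free_bw": 800, "own_infra": True}},
    "PoP-B": {"PoP-D": {"metric": 10, "free_bw": 100, "own_infra": False}},
    "PoP-C": {"PoP-D": {"metric": 30, "free_bw": 800, "own_infra": True}},
    "PoP-D": {},
}

def cspf(src, dst, min_bw=0, own_infra_only=False):
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, attrs in topology[node].items():
            if attrs["free_bw"] < min_bw:                 # constraint: bandwidth
                continue
            if own_infra_only and not attrs["own_infra"]: # constraint: own infrastructure
                continue
            if nxt not in seen:
                heapq.heappush(queue, (cost + attrs["metric"], nxt, path + [nxt]))
    return None

print(cspf("PoP-A", "PoP-D"))                 # plain shortest path via PoP-B
print(cspf("PoP-A", "PoP-D", min_bw=500))     # bandwidth constraint forces the PoP-C route
```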

(3) Bandwidth reservation: MPLS DiffServ-aware Traffic Engineering (DS-TE) enables per-class TE across an MPLS-TE network. Physical interfaces and TE tunnels / LSPs can be told how much bandwidth can be reserved or used. This can be used to dynamically allocate, share and adjust over time the bandwidth given to critical services such as VoIP and to best-effort traffic such as Internet browsing and email.
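A toy sketch of per-class bandwidth book-keeping on a link, in the spirit of the DS-TE idea above (not an implementation of the actual standard), might look like this; the class names, shares and admission rule are illustrative assumptions.

```python
# A toy sketch of per-class bandwidth book-keeping on a link. Illustrative
# only - class names, shares and the admission rule are invented.

class LinkBandwidthPool:
    def __init__(self, capacity_mbps, class_shares):
        # class_shares: fraction of the link each traffic class may reserve
        self.capacity = capacity_mbps
        self.limits = {c: capacity_mbps * share for c, share in class_shares.items()}
        self.reserved = {c: 0.0 for c in class_shares}

    def admit(self, traffic_class, mbps):
        """Admit a reservation only if its class stays within its own share."""
        if self.reserved[traffic_class] + mbps <= self.limits[traffic_class]:
            self.reserved[traffic_class] += mbps
            return True
        return False

link = LinkBandwidthPool(1000, {"voip": 0.3, "best_effort": 0.7})
print(link.admit("voip", 250))        # True  - within the 300 Mbit/s VoIP share
print(link.admit("voip", 100))        # False - would exceed the VoIP share
print(link.admit("best_effort", 600)) # True  - best effort has 700 Mbit/s available
```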

(4) Fast Re-Route (FRR): MPLS-TE supports local rerouting around a faulty node (node protection) or a faulty link (link protection). Planners can define alternative paths to be used when a failure occurs. FRR can reroute traffic in tens of milliseconds, minimising downtime. However, although FRR sounds like a good idea, the amount of computing effort required to calculate FRR paths for a complete network is very significant. If a carrier does not have the appropriate path computation tools, using FRR could cause significant problems by rerouting traffic non-optimally onto a segment of the network that is already congested rather than one that is under-utilised (Picture: An LSP tunnel and its backup. Credit: Wandl).
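The sketch below illustrates why that effort mounts up: every protected link needs its own bypass, computed over the topology with that link removed, and the whole exercise must be repeated whenever the topology or the traffic changes. A simple breadth-first search stands in for the real constrained computation; the five-node topology is invented.

```python
# A sketch of per-link bypass computation for FRR planning. A plain BFS stands
# in for the real constrained computation; the topology is invented.

from collections import deque

links = {("A", "B"), ("B", "C"), ("C", "E"), ("A", "D"), ("D", "E"), ("B", "D")}

def neighbours(node, excluded):
    for a, b in links - {excluded}:
        if a == node:
            yield b
        if b == node:
            yield a

def bypass(protected_link):
    """Find any path between the ends of the protected link that avoids it."""
    src, dst = protected_link
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbours(path[-1], protected_link):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# One computation per protected link - and a real tool must redo all of this
# whenever the topology or the traffic load changes.
for link in sorted(links):
    print(link, "->", bypass(link))
```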

There are other additions to MPLS covered by the MPLS-TE extensions, but these are minor compared to the ones described above.

Practical use of MPLS-TE

One would imagine that, with all the benefits that accrue to a carrier using MPLS-TE – enhanced service quality, easier new service deployment and reduced risk – carriers would flock to it. However, this is not necessarily the case because, as with all new technologies, there are alternatives:

  • Traditional over-provisioning: Traffic engineering management can be a very complicated task if you attempt to analyse all the flows within a large network. One of the traditional ways that many carriers get round this onerous and challenging task is to simply over-provision their networks. If a network is geographically constrained or is just plain simple, then throwing bandwidth at the network can be seen as a simple and unchallenging solution. Network equipment is so much cheaper than it used to be (and smaller carriers can buy equipment from eBay – cough, cough!). Dark fibre or multi-Gbit/s links can be bought or leased relatively cheaply as well. So why bother putting in the effort to traffic engineer a network properly?
  • The underlying network does the TE: Many carriers still use the underlying network to traffic engineer their networks. Although ATM is not around as much as it used to be, SDH still is.
  • Stick with IGP adjustment: Many carriers still stick to the simple IGP metric adjustment discussed earlier, as it handles the simple TE activities they require. True, many would moan about how difficult this is to manage, but migrating to an MPLS-TE environment could be seen as a costly exercise and they currently do not have the time, resource or money to undertake the transition.
  • Let’s wait and see: There are so many considerations and competitive options that the easiest decision to make is to do nothing.

Round up

IP traffic engineering is a hot subject and brings forth a considerable variety of views and emotions when discussed. Many carriers have stuck with methods that they have outgrown but are hesitant about making the jump, thinking that something better will come along. Many try to avoid the issue completely by simply over-provisioning and taking an ultra-KISS approach.

However, those carriers that are truly pursuing a converged Next Generation architecture, with all services based on IP and legacy services carried in pseudowire tunnels, cannot avoid undertaking real traffic engineering to the degree once practised by the old PSTN planning departments. To a great extent, this could be seen as the wild child of IP networking growing up to become a mature and responsible adult. The IP industry has a long way to go though, as creating standards is difficult enough but getting people to use them is something else!

Whatever else, simply sitting and waiting is not the solution…

Aria Networks (UK)

the economics of network control (USA)

Making networks perform (USA)
You have one network, We have one plan (USA)

Wandl wide area design laboratory (USA)

Addendum #1: The follow-on article to this post is: Path Computation Element (PCE): IETF’s hidden jewel

Addendum #2: GMPLS and common control

Addendum #3: Aria Networks shows the optimal path

