The tale of DOCSIS and cable operators

May 2, 2007

When anyone who uses the Internet on a regular basis is offered an upgrade to their access speed, they will usually jump at it without a second thought. There used to be a similar reflex with personal computers and their operating systems and processor speeds, but this is a less common trend these days as the benefits to be gained are often ephemeral, as we have recently seen with Microsoft’s Vista. (Picture: SWINOG)

However, the advertising headline for many ISPs still focuses on “XX Mbit/s for as little as YY Pounds/month”. Personally, in recent years I have not seen too many benefits in increasing my Internet access speed, because I see little improvement when browsing normal WWW sites: their performance is no longer bottlenecked by my access connection but rather by the performance of the servers. My motivation for getting more bandwidth into my home is the need for sufficient bandwidth – both upstream and downstream – to support my family’s use of multiple video and audio services at the same time. Yes, we are as dysfunctional as everyone else, with computers in nearly every room of the house and everyone wanting to do their own video or interactive thing.

I recently posted an overview of my experience of Joost, the new ‘global’ television channel recently launched by Skype founders Niklas Zennstrom and Janus Friis, in Joost’s beta – first impressions, and it’s interesting to note that, as a peer-to-peer system, it requires significant chunks of your access bandwidth, as discussed in Joost: analysis of a bandwidth hog.

The author’s analysis shows that it “pulls around 700 kbps off the internet and onto your screen” and “sends a lot of that data on to other users – about 220 kbps upstream”. If Joost is a window on the future of IPTV on the Internet, then it should be of concern to the ISP and carrier communities, and it should also be of concern to each of us that uses it. 220kbit/s is a good chunk of the 250kbit/s upstream capability of ADSL-based broadband connections. If the upstream channel is clogged, response times on all services being accessed will be affected – even more so if several individuals are accessing Joost over a single broadband connection.

It’s these issues that make me want to upgrade my bandwidth and think about the technology I could use to access the Internet. In this space there has been an on-going battle for many years between ADSL or VDSL over twisted copper pairs, used by incumbent carriers, and the cable technology used by competitive cable companies such as Virgin Media to deliver the Internet to your home.

Cable TV networks (CATV) have come a long way since the 60s, when they were based on simple analogue video distribution over coaxial cable. These days they are capable of delivering multiple services and are highly interactive, allowing in-band user control of content, unlike satellite delivery, which requires a PSTN-based back-channel. The technical standard that enables these services is developed by CableLabs and is called the Data Over Cable Service Interface Specification (DOCSIS). This defines the interface requirements for cable modems involved in high-speed data distribution over cable television networks.

The graph below shows the split between ADSL and cable-based broadband subscribers (Source: Virgin Media), with cable trailing ADSL to a degree. The link provides an excellent overview of the UK broadband market in 2006, so I won’t comment further here.

A DOCSIS-based broadband cable system is able to deliver MPEG-based video content mixed with IP, enabling the provision of the converged service required in 21st-century homes. Cable systems operate in a parallel universe – well, not quite, but they do run a parallel spectrum enclosed within their cable network, isolated from the open spectrum used by terrestrial broadcasters. This means that they are able to change standards when required without needing to consider other spectrum users, as happens with broadcast services.

The diagram below shows how the spectrum is split between upstream and downstream data flows (Picture: SWINOG), and various standards specify the data modulation (QAM) and bit-rate standards. As is usual in these matters, there are differences between the USA and European standards due to differing frequency allocations and television standards – NTSC in the USA and PAL in Europe. Data is usually limited to between 760 and 860MHz.

The DOCSIS standard has been developed by CableLabs and the ITU with input from a multiplicity of companies. The customer premises equipment is called a cable modem, and the Central Office (head-end) equipment is called a cable modem termination system (CMTS).

Since 1997 there have been various releases of the DOCSIS standard (Source: CableLabs), with the most recent, version 3.0, released in 2006.

DOCSIS 1.0 (Mar. 1997) (High Speed Internet Access) Downstream: 42.88 Mbit/s and Upstream: 10.24 Mbit/s

  • Modem price has declined from $300 in 1998 to <$30 in 2004

DOCSIS 1.1 (Apr. 1999) (Voice, Gaming, Streaming)

  • Interoperable and backwards-compatible with DOCSIS 1.0
  • “Quality of Service”
  • Service Security: CM authentication and secure software download
  • Operations tools for managing bandwidth service tiers

DOCSIS 2.0 (Dec. 2001) (Capacity for Symmetric Services) Downstream: 42.88 Mbit/s and Upstream: 30.72 Mbit/s

  • Interoperable and backwards-compatible with DOCSIS 1.0 / 1.1
  • More upstream capacity for symmetrical service support
  • Improved robustness against interference (A-TDMA and S-CDMA)

DOCSIS 3.0 (Aug. 2006) Downstream: 160 Mbit/s and Upstream: 120 Mbit/s

  • Wideband services provided by expanding the usable bandwidth through channel bonding: instead of data being carried over a single channel, it is multiplexed across a number of channels (see the sketch after this list; a previous post talked about bonding in the ADSL world – Sharedband: not enough bandwidth?)
  • Support for IPv6
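As a back-of-the-envelope illustration of what bonding buys, the sketch below multiplies an assumed per-channel rate by the number of bonded channels. The per-channel figures are my own rough assumptions for illustration, not numbers from the DOCSIS specification.

```python
# Rough arithmetic for DOCSIS 3.0 channel bonding. The per-channel payload
# rates below are illustrative assumptions (roughly a 6MHz/256-QAM downstream
# channel and a DOCSIS 2.0-style upstream channel), not spec figures.

DOWNSTREAM_CHANNEL_MBPS = 40
UPSTREAM_CHANNEL_MBPS = 30

def bonded_rate(channels: int, per_channel_mbps: float) -> float:
    """Aggregate rate when data is striped across several bonded channels."""
    return channels * per_channel_mbps

# Four bonded channels in each direction gives figures in the same region
# as the DOCSIS 3.0 headline rates quoted above.
print(f"Downstream: {bonded_rate(4, DOWNSTREAM_CHANNEL_MBPS)} Mbit/s")
print(f"Upstream:   {bonded_rate(4, UPSTREAM_CHANNEL_MBPS)} Mbit/s")
```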

Roundup

With the release of the DOCSIS 3.0 standard, it looks like cable companies around the world are now set to upgrade the bandwidth they can offer their customers in coming years. However, this will be an expensive upgrade to undertake, with head-end equipment needing to be upgraded first, followed by field cable modem upgrades over time. I would hazard a guess that it will be at least five years before the average cable user sees the benefits.

I also wonder what price will need to be paid for the benefit of gaining higher bandwidth through channel bonding when there is limited spectrum available for data services on a cable system. A limit on subscriber-number scalability, perhaps?

I was also interested to read about the possible adoption of IPv6 in DOCSIS 3.0. It was clear to me many years ago that IPv6 would ‘never’ (never say never!) be adopted across the open Internet because of the scale of the task. Its best chance would be in closed systems such as satellite access services and IPTV systems. Maybe cable systems are another option. I will catch up on IPv6 in a future post.


    Aria Networks shows the optimal path

    April 26, 2007

    In a couple of previous posts, Path Computation Element (PCE): IETF’s hidden jewel and MPLS-TE and network traffic engineering I mentioned a company called Aria Networks who are working in the technology space discussed in those posts. I would like to take this opportunity to write a little about them.

Aria Networks are a small UK company that have been going for around eighteen months. The company is commercially led by Tony Fallows; the core technology has been developed by Dr Jay Perrett, Chief Science Officer and Head of R&D, and Daniel King, Chief Operating Officer; and their CTO is Adrian Farrel. Adrian currently co-chairs the IETF Common Control and Measurement Plane (CCAMP) working group that is responsible for GMPLS, and also co-chairs the IETF Path Computation Element (PCE) working group.

The team at Aria have brought some very innovative software technology to the products they supply to network operators and network equipment vendors. Their raison d’etre, as articulated by Daniel King, is “to fundamentally change the way complex, converged networks are designed, planned and operated”. This is an ambitious goal, so let’s take a look at how Aria plan to achieve it.

Aria currently supplies software that addresses the complex task of computing constraint-based packet paths across an IP or MPLS network and optimising that network holistically and in parallel. Holistic is a key word for understanding Aria’s products. It means that when an additional path needs to be computed in a network, the whole network and all the services running over it are recalculated and optimised in a single calculation. A simple example of why this is so important is shown here.

This ability to compute holistically rather than piecemeal requires some very slick software, as it is a computationally intensive (‘hard’) calculation that could easily take many hours on other systems. Parallel is the other key word. When an additional link is added to a network there can be a knock-on effect on any other link in the network, so re-computing all the paths in parallel – both existing and new – is the only way to ensure a reliable and optimal result.

Traffic engineering of IP, MPLS or Ethernet networks could quite easily be dismissed by the non-technical management of a network operator as an arcane activity but, as anyone with experience of operating networks can vouch, good traffic engineering brings pronounced benefits: it directly reduces costs while improving customers’ experience of using services. Of course, a lack of appropriate traffic engineering has the opposite effect. Only one thing could be put above traffic engineering in achieving a good brand image, and that is good customer service. The irony is that if money is not spent on good traffic engineering, ten times the amount will need to be spent on call centre facilities papering over the cracks!

One quite common view held by a number of engineers is that traffic engineering is not required because, they say, “we throw bandwidth at our network”. If a network has an abundance of bandwidth then in theory there will never be delays caused by an inadvertent overload of a particular link. This may be true, but it is certainly an expensive and short-sighted solution, and one that could turn out to be risky as new customers come on board. Combined with the slow provisioning times often associated with adding optical links, it can cause major network problems. The challenge of planning and optimising is significantly increased in a Next Generation Network (NGN), where traffic is actively segmented into different classes such as real-time VoIP and best-effort Internet access. Traffic engineering tools will become even more indispensable than they have been in the past.

It’s interesting to note that even if a protocol like MPLS-TE, PBT, PBB-TE or T-MPLS has all the traffic engineering bells and whistles any carrier may ever desire, it does not mean they can be used. TE extensions such as Fast ReRoute (FRR) need sophisticated tools or they quickly become unmanageable in a real network.

    Aria’s product family is called intelligent Virtual Network Topologies (iVNT). Current products are aimed at network operators that operate IP and / or MPLS-TE based networks.

iVNT MPLS-TE enables network operators to design, model and optimise MPLS Traffic Engineered (MPLS-TE) networks that use constraint-based point-to-point Label Switched Paths (LSPs), constraint-based point-to-multipoint LSPs and Fast-Reroute (FRR) bypass tunnels. One of its real strengths is that it goes to town on supporting any type of constraint that could be placed on a link – delay, hop count, cost, required bandwidth, link-layer protection, path disjointedness, bi-directionality, etc. Indeed, it is quite straightforward to add any additional constraints that an individual carrier may need.
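To make the idea of constraint-based path computation concrete, here is a minimal sketch of the classic approach: prune every link that cannot satisfy the constraints, then run a shortest-path search over what remains. The topology, constraint set and function names are invented for illustration; Aria’s actual algorithms are, of course, far richer than this.

```python
import heapq

# Toy constrained shortest-path (CSPF-style) computation: drop links that
# fail the constraints, then run Dijkstra on the pruned graph.

def cspf(links, src, dst, min_bw, max_delay_per_hop):
    # links: {(a, b): {"cost": int, "bw": float, "delay": float}}
    graph = {}
    for (a, b), attrs in links.items():
        if attrs["bw"] >= min_bw and attrs["delay"] <= max_delay_per_hop:
            graph.setdefault(a, []).append((b, attrs["cost"]))
            graph.setdefault(b, []).append((a, attrs["cost"]))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, link_cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + link_cost, nxt, path + [nxt]))
    return None  # no path satisfies the constraints

links = {
    ("A", "B"): {"cost": 1, "bw": 100, "delay": 5},
    ("B", "C"): {"cost": 1, "bw": 20,  "delay": 5},   # too little bandwidth
    ("A", "D"): {"cost": 2, "bw": 100, "delay": 5},
    ("D", "C"): {"cost": 2, "bw": 100, "delay": 5},
}
print(cspf(links, "A", "C", min_bw=50, max_delay_per_hop=10))
# -> (4, ['A', 'D', 'C']): the cheaper A-B-C route fails the bandwidth check
```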

iVNT IP enables network operators to design, model and optimise IP and Label Distribution Protocol (LDP) networks based on the metrics used in the Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS) Interior Gateway Protocols (IGPs), to ensure traffic flows are correctly balanced across the network. Although forgoing more advanced traffic engineering capabilities is clearly not the way to go in the future, many carriers still stick with ‘simple’ IP solutions – though these are far from simple in practice and can be an operational nightmare to manage.

    What makes Aria software so interesting?

To cut to the chase, it’s the use of Artificial Intelligence (AI) applied to path computation and network optimisation. Conventional algorithms used in network optimisation software are linear in nature and usually deterministic, in that they produce the same answer for the same set of variables every time they are run. They are usually ‘tuned’ to a single service type and are often very slow to produce results when faced with a very large network that uses many paths and carries many services. Aria’s software approach may produce different, but still correct, results each time it is run, and is able to handle multiple services that are inherently and significantly different from a topology perspective, e.g. point-to-point (P2P) and point-to-multipoint (P2MP) services and mesh-like IP-VPNs.

Aria uses evolutionary and genetic techniques, which are good at learning new problems, and runs multiple algorithms in parallel. The software then selects whichever algorithm is better at solving the particular problem it is challenged with. The model evolves multiple times and quickly converges on the optimal solution. Importantly, the technology is very amenable to parallel computing to speed up the processing of complex problems such as holistic network optimisation.

It generally does not make sense to use the same algorithm to solve the network path optimisation needs of every service – iVNT runs many in parallel and self-selection shows which is the most effective for the current problem.
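For a flavour of the evolutionary approach, the toy below breeds a population of candidate path assignments and keeps the fittest, where fitness penalises overloaded links. Everything here – the demands, candidate paths and mutation scheme – is invented for illustration; it is not DANI, just the general shape of a genetic search.

```python
import random

random.seed(42)  # reproducible toy run

CANDIDATE_PATHS = {  # demand -> list of candidate paths (each a set of links)
    "d1": [frozenset({"L1", "L2"}), frozenset({"L3"})],
    "d2": [frozenset({"L1"}), frozenset({"L3", "L4"})],
}
DEMAND_BW = {"d1": 40, "d2": 80}
LINK_CAPACITY = 100

def fitness(individual):
    """Higher is better: penalise bandwidth placed beyond link capacity."""
    load = {}
    for demand, path_idx in individual.items():
        for link in CANDIDATE_PATHS[demand][path_idx]:
            load[link] = load.get(link, 0) + DEMAND_BW[demand]
    return -sum(max(0, l - LINK_CAPACITY) for l in load.values())

def evolve(generations=50, pop_size=20):
    pop = [{d: random.randrange(len(p)) for d, p in CANDIDATE_PATHS.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = dict(parent)           # copy, then mutate one gene
            mutated = random.choice(list(CANDIDATE_PATHS))
            child[mutated] = random.randrange(len(CANDIDATE_PATHS[mutated]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, "fitness:", fitness(best))  # fitness 0 means no overloaded links
```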

Aria’s core technology is called DANI (Distributed Artificial Neural Intelligence) and is a “flexible, stable, proven, scalable and distributed computation platform”. DANI was developed by two of Aria’s founders, Jay Perrett and Daniel King, and has had a long proving ground in the pharmaceutical industry for pre-clinical drug discovery, which needs the analysis of millions of individual pieces of data to isolate interesting combinations. The company that addresses the pharmaceutical industry is Applied Insilico.

Because of its use of AI, iVNT is able to compute a solution for a complex network containing thousands of different constraint-based links, hundreds of nodes and multiple services – such as P2P LSPs, Fast Reroute (FRR) links, P2MP links (IPTV broadcast) and meshed IP-VPN services – in just a few minutes on one of today’s notebooks.

    What’s the future direction for Aria’s products?

Step to multi-layer path computation: As discussed in the posts mentioned above, Aria is very firmly supportive of the need to provide automatic multi-layer path computation. This means that the addition of a new customer’s IP service will be passed as a bandwidth demand to the MPLS network and downwards to the GMPLS-controlled ASTN optical network, as discussed in GMPLS and common control.

Path Computation Element (PCE): Aria are at the heart of the development of on-line path computation, so if this is a subject of interest to you then give Aria a call.

    Two product variants address this opportunity:

    iVNT Inside is aimed at Network Management System (NMS) vendors, Operational Support System (OSS) vendors and Path Computation Element (PCE) vendors that have a need to provide advanced path computation capabilities embedded in their products.

    iVNT Element is for network equipment vendors that have a need to embed advanced path computation capabilities in their IP/MPLS routers or optical switches.

    Roundup

Aria Networks could be considered a rare company in the world of start-ups. It has well-tried technology whose inherent characteristics are admirably matched to the markets and technical problems it is addressing, and its management team is actively involved in developing the standards that its products are, or will be, able to support. There could be no better basis for getting their products right.

It is early days for carriers turning in their droves to NGNs, and even earlier days for them to adopt on-line PCE in their networks, but Aria’s timing is on the nose, as most carriers are thinking about these issues and actively looking for tools today.

Aria could be well positioned to benefit from the explosion of NGN convergence as it seems – to me at least – that fully converged networks will be very challenging to design, optimise and operate without the new approach and tools from companies such as Aria.

    Note: I need to declare an interest as I worked with them for a short time in 2006.


    PONs are anything but passive

    April 18, 2007

Passive Optical Networks (PONs) are an enigma to me in many ways. On the one hand, the concept goes back to the late 1980s and has been floating around ever since, with obligatory presentations from the large vendors whenever you visited them. Yes, for sure there are Pacific Rim countries and the odd state or incumbent carrier in the western world deploying the technology, but it never seemed to impact my part of the world. On the other hand, the technology would provide virtual Internet nirvana for me at home, with 100Mbit/s available to support video on demand for each member of my family who, in 21st-century fashion, have computers in virtually every room of the house! This high bandwidth still seems as far away as ever, with an average downstream bandwidth of 4.5Mbit/s in the UK. I see 7Mbit/s as I am close to a BT exchange. We are still struggling to deliver 21st-century data services over 19th-century copper wires using ATM-based Digital Subscriber Line (DSL) technology. If you are in the right part of the country, you get marginally higher rates from your cable company. Why are carriers not installing optical fibre to every home?

To cut to the chase, it’s because of the immense cost of deploying it. Fibre to the Home (FTTH), as it is known, requires the installation of a completely new optical fibre infrastructure between the carrier’s exchanges and homes. Such an initiative would almost certainly require government leadership and funding to make it worthwhile – which of course is what has happened in the Far East. Here in the UK this is further clouded by the existing cable industry, which has struggled to reach profitability based on massive investments in infrastructure during the 90s.

    What are Passive Optical Networks (PONs)?

The key word is passive. In the standard optical transmission equipment used in the core of public voice and data networks, all of the data being transported is switched using electrical or optical switches. This means investment needs to be made in the network equipment that undertakes that switching, and that is expensive. In a PON, instead of electrical equipment joining or splitting optical fibres, the fibres are simply welded together at minimal cost – just like T-junctions in domestic plumbing. Light travelling down the fibre then splits or joins when it hits a splice. Equipment in the carrier’s exchange (or Central Office [CO]) and in the customer’s home then multiplexes or de-multiplexes an individual customer’s data stream.

Although the use of PONs considerably reduces equipment costs – no switching equipment is required in the field and hence no electrical power feeds are needed – it is still an extremely expensive technology to deploy, making it very difficult to create a business case that stacks up. A major problem is that there is often no free space available in existing ducts, pushing carriers towards new digs. Digging up roads and laying fibre is a costly activity. I’m not sure what the actual costs are these days, but £50 per metre dug used to be the figure many years ago.

    As seems to be the norm in most areas of technology, there are two PON standards slugging it out in the market place with a raft of evangelists attached to both camps. The first is Gigabit PON (GPON) and the second is Ethernet PON (EPON).

    About Gigabit PON (GPON)

The concept of PONs goes back to the early 1990s, to the time when the carrier world was focussed on a vision of ATM as the world’s standard packet- or cell-based WAN and LAN transmission technology. This never really happened, as I discussed in The demise of ATM, but ATM lives on in other services defined around that time. Two examples are broadband Asymmetric DSL (ADSL) and the lesser-known ATM Passive Optical Network (APON).

APON was not widely deployed and was soon superseded by the next best thing – Broadband PON (BPON), also known as ITU-T G.983 as it was developed under the auspices of the ITU. More importantly, APON was limited in the number of data channels it could handle, and BPON added Wave Division Multiplexing (WDM) (covered in TechnologyInside in Making SDH, DWDM and packet friendly). BPON uses one wavelength for 622Mbit/s downstream traffic and another for 155Mbit/s upstream traffic.

If there are 32 subscribers on the system, that bandwidth is divided among the 32 subscribers, plus overhead. Upstream, a BPON system provides 3 to 5 Mbit/s per subscriber when fully loaded.
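A quick sanity check on those figures, assuming (my assumption) something like 15% protocol overhead on the shared tree:

```python
# Rough per-subscriber arithmetic for a shared BPON tree, using the figures
# quoted above (622 Mbit/s down, 155 Mbit/s up, 32-way split). The overhead
# percentage is an illustrative assumption.

def per_subscriber(shared_mbps, subscribers=32, overhead=0.15):
    """Average share of a PON tree once protocol overhead is deducted."""
    return shared_mbps * (1 - overhead) / subscribers

print(f"Downstream: {per_subscriber(622):.1f} Mbit/s each")
print(f"Upstream:   {per_subscriber(155):.1f} Mbit/s each")
# The upstream share lands in the 3-5 Mbit/s region mentioned above.
```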

GPON is the latest upgrade from this stable; it uses an SDH data framing standard and provides a data rate of 2.5Gbit/s downstream and 1.25Gbit/s upstream. The big technical difference is that GPON is based on Ethernet and IP rather than ATM.

It is likely that GPON will find its natural home in the USA and Europe. An example is Verizon, which is deploying 622Mbit/s BPON to its subscribers but is committed to upgrading to GPON within twelve months. In the UK, BT’s OpenReach has selected GPON for a trial.

    About Ethernet PON (EPON)

EPON comes from the IEEE stable and is called IEEE 802.3ah. EPONs are based on Ethernet standards and derive the benefits of using this commonly adopted technology. EPON uses only a single fibre between the subscriber split and the central office and does not require any power in the field, such as would be needed if kerb-side equipment were required. EPON also supports downstream Point-to-Multipoint (P2MP) broadcast, which is very important for broadcasting video. As with carrier-grade Ethernet standards such as PBB, some core Ethernet features such as CSMA/CD have been dropped in this new use of Ethernet. Only one subscriber is able to transmit at any time, using a Time Division Multiple Access (TDMA) protocol.
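The sketch below illustrates the time-slicing idea with a toy round-robin grant scheduler. Real EPON head ends use the MPCP GATE/REPORT mechanism and far smarter dynamic bandwidth allocation; the ONU names and the fixed cycle here are my own illustrative assumptions.

```python
# Toy round-robin TDMA grant scheduler: the head end divides one upstream
# cycle equally among the attached ONUs so only one transmits at a time.

def grant_schedule(onus, cycle_us=1000):
    """Return (onu, start_us, end_us) grants for one upstream cycle."""
    slot = cycle_us // len(onus)
    schedule, start = [], 0
    for onu in onus:
        schedule.append((onu, start, start + slot))
        start += slot
    return schedule

for onu, start, end in grant_schedule(["ONU-1", "ONU-2", "ONU-3", "ONU-4"]):
    print(f"{onu}: transmit from {start}us to {end}us")
```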

A typical deployment is shown in the picture below: one fibre to the exchange connecting 32 subscribers.

    EPON architecture (Source: IEEE)

    A Metro Ethernet Forum overview of EPON can be found here.

The Far East, especially Japan, has taken EPON to its heart, with the vast majority installed by NTT, the major Japanese incumbent carrier, followed by Korea Telecom, with hundreds of thousands of EPON connections.

    Roundup

There is still a lot of development taking place in the world of PONs. On one hand, 10Gbit/s EPON is being talked about to give it an edge over 2.5Gbit/s GPON. On the other, WDM PONs are being trialled in the Far East, which would enable far higher bandwidths to be delivered to each home. WDM-PON systems allocate a separate wavelength to each subscriber, enabling the delivery of 100Mbit/s or more.

    Only this month it was announced that a Japanese MSO Moves 160 Mbit/s using advanced cable technology (the subject of a future TechnologyInside post).

DSL-based broadband suffers from a pretty major problem: the farther the subscriber is from their local exchange, the lower the data rate that can be supported reliably. PONs do not have this limitation (well, technically they do, but the distance is much greater). So in the race to increase data rates in the home, PONs are a clear-cut winner, along with cable technologies such as DOCSIS 3.0 used by cable operators.

Personally, I would not expect PON deployment to increase above its snail-like pace in Europe at any time in the near future. Expect to see the usual trials announced by the largest incumbent carriers such as BT, FT and DT, but don’t hold your breath waiting for it to arrive at your door. This was questioned recently in a government report, which warned that the lack of high-speed Internet access could jeopardise the UK’s growth in future years.

You may think “so what – I’m happy with 2 to 7Mbit/s ADSL!”, but I can say with confidence that you should not be happy. The promise of IPTV services is at long last starting to be delivered, and encoding bandwidths of 1 to 2Mbit/s really do not cut the mustard in the quality race. This is the case for standard definition, let alone high-definition TV. Moreover, with each family member having a computer and television in their own room, and each wanting to watch or listen to their own programmes simultaneously, low-speed ADSL connections are far from adequate.

One way out of this is to bond multiple DSL lines together to gain extra bandwidth. I wrote a post a few weeks ago – Sharedband: not enough bandwidth? – about a company that provides software to do just this. The problem is that you would require an awful lot of telephone lines to get the 100Mbit/s that I really want! Maybe I should emigrate?

    Addendum #1: Economist Intelligence Unit: 2007 e-readiness rankings


    GMPLS and common control

    April 16, 2007

From small beginnings, MultiProtocol Label Switching (MPLS) has come a long way in ten years. Although there are a considerable number of detractors who believe it costly and challenging to manage, it has now been deployed in just about every carrier around the world in one guise or another (MPLS-TE), as discussed in The rise and maturity of MPLS. Moreover, it is now extending its reach down the stack into the optical transmission world through activities such as T-MPLS, covered in PBB-TE / PBT or will it be T-MPLS? (Picture: GMPLS: Architecture and Applications by Adrian Farrel and Igor Bryskin.) In the same way that early SDH standards did not encompass appropriate support for packet-based services, as discussed in Making SDH, DWDM and packet friendly, the initial MPLS standards were firmly focussed on IP networks, not on optical wavelength or TDM switching.

The promise of MPLS was to bring the benefits of a connection-oriented regime to the inherently connectionless world of IP networks and to be able to send traffic along pre-determined paths, thus improving performance. This was key for the transmission of real-time or isochronous services such as VoIP over IP networks. Labels attached to packets enabled the creation of Label Switched Paths (LSPs) which packets would follow through the network. Just as importantly, it was possible to specify the quality of service (QoS) of an LSP, enabling the prioritisation of traffic based on importance.

It was inevitable that MPLS would be extended so it could be applied to the optical world, and this is where the IETF’s Generalised MPLS (GMPLS) standards come in. Several early packet and data transmission standards bundled together signalling and data planes in vertical ‘stove-pipes’, creating services that needed to be managed from top to bottom completely separately from each other.

The main vision of GMPLS was to create a common control plane that could be used across multiple services and layers, thus considerably simplifying network management by automating end-to-end provisioning of connections and centrally managing network resources. In essence, GMPLS extends MPLS to cover the packet, time, wavelength and fibre domains. A GMPLS control plane also lies at the heart of T-MPLS, replacing the older proprietary optical Operational Support Systems (OSS) supplied by optical equipment manufacturers. GMPLS provides all the capabilities of those older systems and more.

    GMPLS is also often referred to as Automatic Switched Transport Network (ASTN) although GMPLS is really the control plane of an ASTN.

    GMPLS extends MPLS functionality by creating and provisioning:

• Time Division Multiplex (TDM) paths, where time slots are the labels (SONET / SDH).
• Frequency Division Multiplex (FDM) paths, where an optical frequency, such as seen in WDM systems, is the label.
• Space Division Multiplexed (SDM) paths, where the label indicates the physical position of the data (photonic cross-connects).
Switching Domain | Traffic Type      | Forwarding Scheme | Example of Device
Packet, cell     | IP, ATM           | Label             | IP router, ATM switch
Time             | TDM (SONET / SDH) | Time slot         | Digital cross-connect
Wavelength       | Transparent       | Lambda            | DWDM system
Physical space   | Transparent       | Fibre, line       | Optical cross-connect (OXC)
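One way to picture the ‘generalised’ part of GMPLS is that a label can now name a forwarding resource in any of the domains in the table above. The sketch below models this idea; the class names are invented for illustration and are not GMPLS protocol objects.

```python
from dataclasses import dataclass
from typing import Union

# Toy "generalised label" types, one per switching domain in the table above.

@dataclass
class PacketLabel:      # MPLS shim label in a packet network
    value: int

@dataclass
class TimeslotLabel:    # SONET/SDH timeslot in a TDM network
    slot: int

@dataclass
class LambdaLabel:      # wavelength in a WDM network
    nm: float

@dataclass
class PortLabel:        # physical fibre/port on a photonic cross-connect
    port: str

GeneralisedLabel = Union[PacketLabel, TimeslotLabel, LambdaLabel, PortLabel]

def describe(label: GeneralisedLabel) -> str:
    """A common control plane treats all of these uniformly as 'labels'."""
    return f"{type(label).__name__}: {label}"

for l in (PacketLabel(1001), TimeslotLabel(3),
          LambdaLabel(1550.12), PortLabel("7/2")):
    print(describe(l))
```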

    GMPLS applicability

    GMPLS has extended and enhanced the following aspects of MPLS:

• Signalling protocols – RSVP-TE and CR-LDP
• Routing protocols – OSPF-TE and IS-IS-TE

    GMPLS has also added:

    • Extensions to accommodate the needs of SONET / SDH and optical networks.
    • A new protocol, link-management protocol (LMP), to manage and maintain the health of the control and data planes between two neighbouring nodes. LMP is an IP-based protocol that includes extensions to RSVP–TE and CR–LDP.

    As GMPLS is used to control highly dissimilar networks operating at different levels in the stack, there are a number of issues it needs to handle in a transparent manner:

• It does not just forward packets in routers, but needs to switch in time, wavelength or physical ports (space) as well.
• It should work with all applicable switched networks – OTN, SONET / SDH, ATM, IP, etc.
• There are still many switches that are not able to inspect traffic and thus not able to extract labels – this is especially true for TDM and optical networks.
• It should facilitate interoperation and integration between dissimilar networks.
• Packet networks work at a finer granularity than optical networks – it would not make sense to allocate a 622Mbit/s SDH link to a 1Mbit/s video IP stream by mistake.
• There is a significant difference in scale between IP and optical networks from a control perspective – optical networks are much larger, with thousands of wavelengths to manage.
• There is often a much bigger latency in setting up an LSP on an optical switch than on an IP router.
• SDH and SONET systems can undertake a fast switch restoration in less than 50ms in case of failure – a GMPLS control plane needs to handle this effectively.

    Round-up

GMPLS / ASTN is now well entrenched in the optical telecommunications industry, with many, if not most, of the principal optical equipment manufacturers demonstrating compatible systems.

It’s easy to see the motivation to create a common control plane (GMPLS was defined under the auspices of the IETF’s Common Control and Measurement Plane (CCAMP) working group), as it would considerably reduce the complexity and cost of managing fully converged Next Generation Networks (NGNs). Indeed, it is hard to see how any carrier could implement a truly converged network without it.

As discussed in Path Computation Element (PCE): IETF’s hidden jewel, converged NGNs will need to compute service paths across multiple networks and domains, and automatically pass service provision at the IP layer down to optical networks such as SDH and ASTN. Again, it is hard to see how this vision can be implemented without a common control plane and GMPLS.

To quote the concluding comment in GMPLS: The Promise of the Next-Generation Optical Control Plane (IEEE Communications Magazine, July 2005, Vol. 43, No. 7):

“we note that far from being abandoned in a theoretical back alley, GMPLS is very much alive and well. Furthermore, GMPLS is experiencing massive interest from vendors and service providers where it is seen as the tool that will bring together disparate functions and networks to facilitate the construction of a unified high-function multilayer network operators will use as the foundation of their next-generation networks. Thus, while the emphasis has shifted away from the control of transparent optical networks over the last few years, the very generality of GMPLS and its applicability across a wide range of switching technologies has meant that GMPLS remains at the forefront of innovation within the Internet.”


Chaos in Bangladesh’s ‘Illegal’ VoIP businesses

    April 11, 2007


Take a listen to a report on BBC Radio Four’s PM programme, broadcast on 9th April, which talks about the current chaos in Bangladesh brought about by the enforced closure of ‘illegal’ VoIP businesses. This is one of the impacts of the state of emergency imposed three months ago and has resulted in a complete breakdown of the Bangladeshi phone network.

It seems that VoIP calls account for up to 80% of telephone traffic into the country from abroad, driven by low call rates of between 1 and 2 pence per minute.

The new military-backed government has been waging war on small VoIP businesses, with the “illegality and corruptions of the past being too long tolerated”. Many officials have been arrested, buildings pulled down and businesses closed.

The practical result has thrown the telephone industry into chaos, as hundreds of thousands of Bangladeshis living abroad try to call home only to get the engaged tone.

    “In many countries VoIP is legal but in Bangladesh it has been long rumoured that high profile politicians have been operating the VoIP businesses and had an interest in keeping them outside of the law and unregulated to avoid taxes on the enormous revenues they generated.”

The report says that the number of conventional phone lines is being doubled in April, but only to 30,000 lines – for a population of over 140 million people this is far too few!

You can listen to the report here: Chaos in Bangladesh’s Illegal VoIP business (Copyright BBC).

    It really is amazing how disruptive a real disruptive technology can be, but when this happens it usually comes back to bite us!

I talked about the SIM box issue in Revector, detecting the dark side of VoIP, and the Bangladesh situation explains why incumbent carriers are often hell-bent on stamping out VoIP traffic. In the western world the situation is no different, but governments and carriers do not just bulldoze the businesses – maybe they should in some cases!

    Addendum #1: the-crime-of-voice-over-ip-telephony/


    Path Computation Element (PCE): IETF’s hidden jewel

    April 10, 2007

In a previous post, MPLS-TE and network traffic engineering, I talked about the challenges of communication network traffic engineering and capacity planning and their relation to MPLS-TE. Interestingly, I realised that I did not mention that all of the engineering planning, design and optimisation activities that form the core of network management usually take place off-line. What I mean by this is that a team of engineers sits down, either on an ad hoc basis driven by new network or customer acquisitions or as part of an annual planning cycle, to produce an upgrade or migration plan that can be used to extend the existing network to meet the needs of the additional traffic. This work does not impact live networks until the OPEX and CAPEX plans have been agreed and signed off by management teams and then implemented. A significant proportion of the data that drives this activity is obtained from product marketing and/or sales teams, who are supposed to know how much additional business, and hence additional traffic, will be imposed on the network in the time period covered by the planning activities.

    This long-term method of planning network growth has been used since the dawn of time and the process should put in place the checks and balances (that were thrown to the wind in the late 1990s) to ensure that neither too much nor too little investment is made in network expansion.

What is a Path Computation Element (PCE)?

What is a path through the network? I’ve covered this extensively in previous posts about MPLS’s ability to guide traffic through a complex network and force particular packet streams to follow a constraint-based, pre-determined path from network ingress to network egress. This deterministic path, or tunnel, enables improved QoS management of real-time services such as Voice over IP or IPTV.

Generally, paths are calculated and managed off-line as part of the overall traffic engineering activity. When a new customer is signed up, their traffic requirements are determined and the most appropriate paths for the traffic are superimposed on the current network topology in the way that best meets the customer’s needs and balances traffic distribution across the network. If new physical assets are required, these are provisioned and deployed as necessary.

Traditional planning cycles are focussed on medium- to long-term needs and cannot really be applied to shorter-term planning. Such short-term needs could derive from a number of requirements, such as:

• Changing network configurations depending on the time of day; for example, there is usually a considerable difference in traffic profiles between office hours, evening hours and night time. The possibility of dynamically moving traffic depending on busy hours (time being the new constraint) could provide significant cost benefits.
    • Dynamic or temporary path creation based on customers’ transitory needs.
    • Improved busy hour management through auto-rerouting of traffic.
    • Dynamic balancing of network load to reduce congestion.
    • Improved restoration when faults occur.

To be able to undertake these tasks a carrier would need to move away from off-line path computation to on-line path computation, and this is where the IETF’s Path Computation Element (PCE) Working Group comes to the rescue.

In essence, on-line PCE software acts very much along the same lines as a graphics chip handling calculations off-loaded from the main CPU in a personal computer. For example, a service requires that a new path be generated through the network, and that request, together with the constrained-path requirements such as bandwidth, delay, etc., is passed to the attached PCE computer. The PCE has a complete picture of the flows and paths in the network at that precise moment, derived from other Operational Support Software (OSS) programmes, so it can calculate in real time the optimal path through the network that will deliver the requested service. This path is then used to automatically update router configurations and the traffic engineering database.
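The sketch below captures that offload pattern: a client hands the PCE a request (endpoints plus constraints) and gets back a computed route. The names, message shapes and brute-force search are all invented for illustration – the real client-to-PCE protocol is the subject of the PCECP requirements work mentioned later.

```python
# A toy PCE: accepts a path request with a bandwidth constraint and returns
# the cheapest feasible route. A real PCE would use far more capable
# algorithms and a live traffic engineering database.

class ToyPCE:
    def __init__(self, topology):
        self.topology = topology  # {(a, b): {"cost": int, "bw": float}}

    def compute(self, src, dst, min_bw):
        best = None
        def walk(node, path, cost):
            nonlocal best
            if node == dst:
                if best is None or cost < best[0]:
                    best = (cost, path)
                return
            for (a, b), attrs in self.topology.items():
                for nxt in ((b,) if a == node else (a,) if b == node else ()):
                    if nxt not in path and attrs["bw"] >= min_bw:
                        walk(nxt, path + [nxt], cost + attrs["cost"])
        walk(src, [src], 0)
        return best

pce = ToyPCE({("A", "B"): {"cost": 1, "bw": 10},
              ("B", "C"): {"cost": 1, "bw": 100},
              ("A", "C"): {"cost": 5, "bw": 100}})
print(pce.compute("A", "C", min_bw=50))
# -> (5, ['A', 'C']): the A-B leg lacks bandwidth, so the direct link wins
```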

In practice, the PCE architecture calls for each Autonomous System (AS) domain to have its own PCE; if a multi-domain path is required, the affected PCEs co-operate to calculate the required path, with the requirement provided by a ‘master’ PCE. The standard supports any combination, number or location of PCEs.

    Why a separate PCE?

    There are a number of reasons why a separate PCE is being proposed:

• Path computation of any form is not an easy or simple task by any means. Even with appropriate software, computing all the primary, back-up and service paths on a complex network will strain computing techniques to the extreme. A number of companies that provide software capable of undertaking this task were listed in the post mentioned above.
• The PCE will need to undertake computationally intensive calculations, so it is unlikely (to me) that a PCE capability will ever be embedded in a router or switch, as they generally do not have the power to undertake path calculations in a complex network.
• If path calculations are to be undertaken in a real-time environment then, unlike off-line software, which can take hours for an answer to pop out, a PCE needs to provide an acceptable solution in just a few minutes or seconds.
• Most MPLS routers calculate a path on the basis of a single constraint, e.g. the shortest path. Calculating paths based on multiple constraints such as bandwidth, latency, cost or QoS significantly increases the computing power required to reach a solution.
• Routers route, and have limited or partial visibility of the complete network, domain and service mix; they are thus not able to undertake the holistic calculations required in a modern converged network.
• In a large network the traffic engineering database (TED) can become very large, creating a significant computational overhead for a core router. Moving TED calculations to a dedicated PCE server could be beneficial in lowering path-request response times.
• In a traditional IP network there may be many legacy devices that do not have an appropriate control plane, thus creating visibility ‘holes’.
• A PCE could be used to provide alternative restorative routing of traffic in an emergency. As a PCE has a holistic view of the network, restoration using a PCE could reduce the potential knock-on effects of a reroute.

    The key aspect of multi-layer support

One of the most interesting architectural aspects of the PCE is that it addresses a very significant issue faced by all carriers today – multi-layer support. All carriers utilise multiple layers to transport traffic – these could include IP-VPN, IP, Ethernet, TDM, MPLS, SDH and optical networks in several possible combinations. The issue is that a path computation at the highest layer inevitably has a knock-on effect down the hierarchy to the physical optical layer. Today each of these layers and protocols is generally managed, planned and optimised as a separate entity, so it would make sense that when a new path is calculated, its requirements are passed down the hierarchy so that knock-on effects can be better managed. The addition of a single small IP link could force the need for an additional fibre.
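A toy of that cascade, under the assumption (mine) that each layer simply absorbs a demand if it has headroom and otherwise requests capacity from the layer beneath:

```python
# Toy multi-layer demand propagation. Layer names and free capacities are
# invented for illustration; real layer interworking is far more involved.

LAYERS = [
    {"name": "IP/MPLS", "free_mbps": 50},
    {"name": "SDH",     "free_mbps": 200},
    {"name": "Optical", "free_mbps": 10000},
]

def place_demand(demand_mbps):
    for layer in LAYERS:
        if layer["free_mbps"] >= demand_mbps:
            layer["free_mbps"] -= demand_mbps
            print(f"{layer['name']}: absorbed {demand_mbps} Mbit/s")
            return
        # Not enough headroom: the demand cascades down as a request for
        # new capacity at this layer, carried by the layer beneath.
        print(f"{layer['name']}: exhausted, requesting new capacity below")
    raise RuntimeError("no layer could carry the demand")

place_demand(100)  # too big for the IP layer's headroom, lands on SDH
```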

Clearly, providing flow-through and visibility of new services to all layers, and managing path computation on a multi-layer basis, would be a real boon for network optimisation and cost reduction. However, let’s bear in mind that this represents a nirvana solution for planning engineers!

    A Multi-layer path

The PCE specification is being defined to provide this cross-layer, or multi-layer, capability. Note that a PCE is not a solution aimed at use across the whole Internet – clearly that would be a step too far, along the lines of the whole Internet upgrading to IPv6!

I will not plunge into the depths of the PCE architecture here, but a complete overview can be found in A Path Computation Element (PCE)-Based Architecture (RFC 4655). At the highest level, the PCE talks to a signalling engine that takes in requests for a new path calculation and passes any consequential requests to other PCEs that might be needed for an inter-domain path. The PCE also interacts with the traffic engineering database to update it automatically as required (Picture source: this paper).

Another interesting requirements document is Path Computation Element Communication Protocol (PCECP) Requirements.

    Round up

These are very early days for the PCE project, but it would seem to provide one of the key elements required to enable carriers to effectively manage a fully converged Next Generation Network. However, I would imagine that operational management in many carriers would be aghast at putting the control of even transient path computation on-line, considering the risk and the consequences for customer experience if it went wrong.

Clearly, a PCE architecture has to be based on powerful computing engines and software that can holistically monitor and calculate new paths in seconds, and most importantly it must be a truly resilient network element. Phew!

Note: One of the few commercial companies working on PCE software is Aria Networks, who are based in the UK and whose CTO, Adrian Farrel, is also chairman of the PCE Working Group. I should declare an interest, as I undertook some work for Aria Networks in 2006.

    Addendum #1: GMPLS and common control

    Addendum #2: Aria Networks shows the optimal path

Addendum #3: It was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF), which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here, starting at slide 10. Does RACF compete with PCE, or could PCE be part of RACF?

    Addendum #4: New web site focusing on PCE: http://pathcomputationelement.com


    iotum’s Talk-Now is now available!

    April 4, 2007

    In a previous post The magic of ‘presence’, I talked about the concept of presence in relation to telecommunications services and looked at different examples of how it had been implemented in various products.

One of the most interesting companies mentioned was iotum, a Canadian company. iotum had developed what they call a relevance engine, which brings ability-to-talk and willingness-to-talk information into a telecom service by attaching it to appropriate equipment such as a Private Branch eXchange (PBX) or a call centre Automatic Call Distribution (ACD) manager.

One of the biggest challenges for any company wanting to translate presence concepts into practical services is how to make them usable, rather than just a fancy concept used to describe a number of peripheral and often unusable features of a service. Alec Saunders, iotum’s founder, has been articulating his ideas about this in his blog post Voice 2.0: A Manifesto for the Future. Like all companies that have their genesis in the IT and applications world, Alec believes that “Voice 2.0 is a user-centric view of the world… it’s all about me — my applications, my identity, my availability.”

And, rather controversially if you come from the network or mobile industry: “Voice 2.0 is all about developers too — the companies that exploit the platform assets of identity, presence, and call control. It’s not about the network anymore.” Oh, by the way, just to declare my partisanship, I certainly go along with this view and often find that the stove-pipe and closed attitudes sometimes seen in mobile operators are one of the biggest hindrances to the growth of data-related applications on mobile phones.

There is always a significant technical and commercial challenge in OEMing platform-based services to service providers and large mobile operators, so the launch of a stand-alone service that is under iotum’s complete control is not a bad way to go. Any business should have full control of its own destiny, and the choice of the relatively open Blackberry platform gives iotum a user base they can clearly focus on to develop their ideas.

iotum launched the beta version of Talk-Now in January. It provides a set of features aimed at helping Blackberry users make better use of the device the world has become addicted to over the last few years. Let’s talk turkey: what does the Talk-Now service do?

According to the web site, as seen in the picture on the left, it provides a simple-in-concept bolt-on service for Blackberry phone users to see and share their availability status with other users.

At the in-use end of the service, Talk-Now interacts with a Blackberry user’s address book by adding colour coding to contact names to show each individual’s availability. On initial release only three colours were used: white, red and green.

Red and green clearly show when a contact is either Not Available or Available; I’ll talk about white in a minute. Yellow was added later, based on user feedback, to indicate an Interruptible status.

The idea behind Talk-Now is that it helps users reduce the amount of time they waste on non-productive calls and leaving voicemails. You may wonder how this availability guidance is provided by users. A contact with a white background provides the first indication of how this is achieved.

Contacts with a white background are not Talk-Now users, so their availability information is not available (!). One of the key features of the service is therefore an Invite People process to get them to use Talk-Now and see your availability information.

If you wish a non-Talk-Now contact to see your availability, you can select their name from the contact list and send them an “I want to talk with you” email. This email provides a link to an Availability Page, as shown below, talks about the benefits of using the service (I assume), and asks them to sign up. This is a secure page that is only available to that contact, and for a short time only.

Once a contact accepts the invite and signs up to the service, you will be able to see their availability – assuming they set the service up.

So, how do you indicate your availability? This is set up with a small menu, as shown on the left. Using this you can set your status information:

    Busy: set your free/busy status manually from your BlackBerry device

    In a meeting: iotum Talk-Now synchronizes with your BlackBerry calendar to know if you are in a meeting.

    At night: define which hours you consider to be night time.

    Blocked group: you can add contacts to the “blocked” group.

You can also set up VIPs (Very Important Persons): individuals who receive priority treatment. This category needs to be used with care, as granting VIP status to a group overrides the unavailability settings you have made. You can also define Workdays: some groups might be VIPs during work hours, while other groups might get VIP status outside of work. This is designed to help you better manage your personal and business communications.
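As a sketch of how such rules might compose – this is my reading of the feature list, not iotum’s implementation – note how VIP status takes precedence over the ordinary unavailability states:

```python
from datetime import time

# Toy availability resolution: blocked wins, then VIP, then the ordinary
# busy / in-meeting / night-time rules. The rule set and precedence are
# assumptions for illustration only.

def availability(contact, now, manual_busy, in_meeting,
                 night=(time(22), time(7))):
    if contact.get("blocked"):
        return "not available"
    if contact.get("vip"):
        return "available"  # VIPs override your unavailability settings
    start, end = night
    at_night = now >= start or now < end
    if manual_busy or in_meeting or at_night:
        return "not available"
    return "available"

print(availability({"vip": True}, time(23, 30), manual_busy=True,
                   in_meeting=False))
# -> 'available': VIP status wins even at night with busy set
print(availability({}, time(23, 30), manual_busy=False, in_meeting=False))
# -> 'not available': the night-time rule applies
```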

There is also a feature whereby you can be alerted when a contact becomes available, via a message posted on your Blackberry, as shown on the right.

    Many of the above setting can be set up via a web page, for example:

    Setting your working week

    Setting contact groups

However, it should be remembered that, like Plaxo and LinkedIn, this web-based functionality requires you to upload – ‘synchronise’ – your Blackberry contact list to the iotum server, and many Blackberry users might object to this. Note as well that your calendar is accessed to determine when you are in meetings and deemed busy.

If you want to hear more, then take a look at the video posted by Blackberry Cool last month after a visit with Alec Saunders and the team:

Talk-Now looks to be an interesting and well-thought-out service. Following traditional Web 2.0 principles, the service is provided free today, in the hope that iotum will be able to charge for additional features at a future date.

    I wish them luck in their endeavours and will be watching intensely to see how they progress in coming months.

