IPv6 to the rescue – eh?

June 21, 2007

To me, IPv6 is one of the Internet’s real enigmas: the supposed replacement for the Internet’s ubiquitous IPv4. We all know this has not happened.

The Internet Protocol (IPv4) is the principal protocol that lies behind the Internet, and it originated before the Internet itself. In the late 1960s a number of US universities needed to exchange data, and there was an interest in developing the new network technologies, switching capabilities and protocols required to achieve this.

The result of this was the formation of the Advanced Research Projects Agency (ARPA), a US government body which started developing a private network called ARPANET; ARPA itself later metamorphosed into the Defense Advanced Research Projects Agency (DARPA). The initial contract to develop the network was won by Bolt, Beranek and Newman (BBN), which eventually became part of Verizon and was sold to two private equity companies in 2004 to be renamed BBN Technologies.

The early services required by the university consortium were file transfer, email and the ability to remotely log onto university computers. The ARPANET’s first host-to-host protocol was the Network Control Protocol (NCP), which saw the light of day in 1971.

In 1973, Vint Cerf, who had worked on NCP (and is now Chief Internet Evangelist at Google), and Robert Kahn (who had previously worked on the Interface Message Processor [IMP]) kicked off a programme to design a next-generation networking protocol for the ARPANET. This activity resulted in the standardisation, through ARPANET Requests For Comments (RFCs), of TCP/IPv4 in 1981 (RFC 791, which superseded the earlier RFC 760).

IPv4 uses a 32-bit address structure which we see most commonly written in dot-decimal notation such as aaa.bbb.ccc.ddd representing a total of 4,294,967,296 unique addresses. Not all of these are available for public use as many addresses are reserved.
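As a quick aside on that arithmetic, here is a minimal Python sketch (the helper names are mine, purely for illustration) showing how dot-decimal notation maps onto a single 32-bit number, and why that gives exactly 4,294,967,296 possibilities:

```python
def dotted_to_int(addr: str) -> int:
    """Pack aaa.bbb.ccc.ddd into the single 32-bit number it represents."""
    a, b, c, d = (int(octet) for octet in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_dotted(value: int) -> str:
    """Unpack a 32-bit number back into dot-decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(dotted_to_int("192.0.2.1"))   # 3221225985
print(int_to_dotted(3221225985))    # 192.0.2.1
print(2 ** 32)                      # 4294967296 possible addresses in total
```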

An excellent book that pragmatically and engagingly goes through the origins of the Internet in much detail is Where Wizards Stay Up Late – it’s well worth a read.

The perceived need for upgrading

The whole aim of the development of IPv4 was to provide a schema to enable global computing by ensuring that computers could uniquely identify themselves through a common addressing scheme and communicate in a standardised way.

No matter how you look at it, IPv4 must be one of the most successful standardisation efforts ever to have taken place, measured by its ubiquity today. Just how many servers, routers, switches, computers, phones and fridges contain an IPv4 protocol stack? I’m not too sure, but it’s certainly a big, big number!

In the early 1990s, as the Internet really started ‘taking off’ outside of university networks, it was generally thought that the IPv4 specification was beginning to run out of steam and would not be able to cope with the scale of the Internet as the visionaries foresaw. Although there were a number of deficiencies, the prime mover for a replacement to IPv4 came from the view that the address space of 32 bits was too restrictive and would completely run out within a few years. This was foreseen because it was envisioned, probably not wrongly, that nearly every future electronic device would need its own unique IP address and if this came to fruition the addressing space of IPv4 would be woefully inadequate.

Thus the IPv6 standardisation project was born. IPv6 packaged together a number of IPv4 enhancements that would enable the IP protocol to be serviceable for the 21st century.

Work started in 1992/93 and by 1996 a number of RFCs had been released, starting with RFC 1883 (since revised as RFC 2460). One of the most important was RFC 1933, which specifically looked at the transition mechanisms for converting IPv4 networks to IPv6. This covered the ability of routers to run IPv4 and IPv6 stacks concurrently – “dual stack” – and the pragmatic ability to tunnel the IPv6 protocol over ‘legacy’ IPv4-based networks such as the Internet.

To quote RFC 1933:

This document specifies IPv4 compatibility mechanisms that can be implemented by IPv6 hosts and routers. These mechanisms include providing complete implementations of both versions of the Internet Protocol (IPv4 and IPv6), and tunnelling IPv6 packets over IPv4 routing infrastructures. They are designed to allow IPv6 nodes to maintain complete compatibility with IPv4, which should greatly simplify the deployment of IPv6 in the Internet, and facilitate the eventual transition of the entire Internet to IPv6.

The IPv6 specification contained a number of areas of enhancement:

Address space: Back in the early 1990s there was a great deal of concern about the lack of availability of public IP addresses. With the widespread uptake of IP rather than ATM as the basis of enterprise private networks, as discussed in a previous post The demise of ATM, most enterprises had gone ahead and implemented their networks with any old IP address they cared to use. This didn’t matter at the time because those networks were not connected to the public Internet, so it was of no consequence whether other computers or routers had selected the same addresses.

It first became a serious problem when two divisions of a company tried to interconnect within their private network and found that both divisions had selected the same default IP addresses and could not connect. This was further compounded when those companies wanted to connect to the Internet and found that their privately selected IP addresses could not be used in the public space as they had been allocated to other companies.

The answer to this problem was to increase the IP protocol addressing space to accommodate all the private networks coming onto the public network. Combined with the vision that every electronic device could contain an IP stack, IPv6 increased the address space to 128 bits rather than IPv4’s 32 bits.
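To get a feel for the difference in scale, Python’s standard ipaddress module can be pressed into service; this is purely illustrative arithmetic, not anything IPv6-specific:

```python
import ipaddress

ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2 ** 32
ipv6_space = ipaddress.ip_network("::/0").num_addresses        # 2 ** 128

print(ipv4_space)                # 4,294,967,296 addresses
print(ipv6_space)                # roughly 3.4 x 10 ** 38 addresses
print(ipv6_space // ipv4_space)  # every IPv4 address maps to 2 ** 96 IPv6 addresses
```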

Headers: Headers in IPv4 (headers precede the data in each packet and carry addressing, routing and other control information) were already becoming unwieldy, and carrying IPv6’s four-times-longer addresses in the same style of header would not have helped. IPv6 instead uses a simplified fixed header of 40 bytes and just eight fields, with no options field; any additional information is placed in extension headers that are chained together and only used when needed. The IPv4 header, by contrast, packs a dozen or so fields plus variable-length options into a minimum of 20 bytes.
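To make the ‘simplified fixed header’ point concrete, here is a rough Python sketch that packs the eight fields of the 40-byte IPv6 fixed header; the function name and defaults are mine, and a real protocol stack obviously does far more than this:

```python
import socket
import struct

def build_ipv6_header(src: str, dst: str, payload_len: int,
                      next_header: int = 6, hop_limit: int = 64,
                      traffic_class: int = 0, flow_label: int = 0) -> bytes:
    """Pack the 40-byte IPv6 fixed header: 8 fields, no options."""
    # First 32 bits: version (4 bits) | traffic class (8 bits) | flow label (20 bits)
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!IHBB16s16s",
                       first_word,
                       payload_len,                                # payload length
                       next_header,                                # e.g. 6 = TCP
                       hop_limit,
                       socket.inet_pton(socket.AF_INET6, src),     # 128-bit source
                       socket.inet_pton(socket.AF_INET6, dst))     # 128-bit destination

hdr = build_ipv6_header("2001:db8::1", "2001:db8::2", payload_len=0)
print(len(hdr))   # 40 bytes, versus a minimum of 20 for IPv4
```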

Configuration: Managing an IP network is still pretty much a manual exercise, with few tools to automate the activity beyond the likes of DHCP (the automatic allocation of IP addresses to computers). Network administrators seem to spend most of the day manually entering IP addresses into fields in network management interfaces, which really does not make much use of their skills.

IPv6 incorporates enhancements to enable a ‘fully automatic’ mode where the protocol can assign an address to itself without human intervention. An IPv6 node sends out a request to enquire whether any other device already has the same address. If it receives a positive reply it adds a random offset and asks again until it receives no reply. IPv6 can also identify nearby routers and automatically detect whether a local DHCP server is available.
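The mechanism that was eventually standardised uses ICMPv6 neighbour discovery for duplicate address detection, but the retry loop described above can be sketched roughly like this; probe() is a hypothetical stand-in for the on-link ‘is anyone already using this address?’ query:

```python
import random

def autoconfigure(probe, candidate: int, max_attempts: int = 8) -> int:
    """Sketch of the self-assignment loop described in the text above."""
    for _ in range(max_attempts):
        if not probe(candidate):                  # no reply: the address is free, claim it
            return candidate
        candidate += random.randint(1, 2 ** 16)   # collision: add a random offset, try again
    raise RuntimeError("could not find a free address")

taken = {0x2001_0DB8_0000_0001}                   # pretend this address is already in use
print(hex(autoconfigure(lambda addr: addr in taken, 0x2001_0DB8_0000_0001)))
```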

Quality of Service: IPv6 has embedded enhancements to enable the prioritisation of certain classes of traffic by assigning values to packets in the Traffic Class field and, for flows needing special handling, the Flow Label field.

Security: IPv6 incorporates IPsec to provide authentication and encryption, improving the security of packet transmission; this is handled by the Authentication Header (AH) and Encapsulating Security Payload (ESP) extension headers.

Multicast: Multicast addresses are group addresses so that packets can be sent to a group rather than an individual. IPv4 handles this very inefficiently while IPv6 has implemented the concept of a multicast address into its core.

So why aren’t we all using IPv6?

The short answer to this question is that IPv4 is a victim of its own success. The task of migrating the Internet to IPv6, even taking into account the available migration options of dual-stack hosting and tunnelling, is just too challenging.

As we all know, the Internet is made up of thousands of independently managed networks, each looking to commercially thrive or often just to survive. There is no body overseeing how the Internet is run except for specific technical aspects such as Domain Name System (DNS) management or the standards body, the IETF. (Picture credit: The logo of the Linux IPv6 Development Project)

No matter how much individual evangelists push for the upgrade, getting the world to follow is pretty much an impossible task unless everyone sees a distinct commercial and technical benefit in doing so.

This is the core issue, and the benefits of upgrading to IPv6 have been seriously eroded by the advent of other standards efforts that address each of the IPv6 enhancements on a stand-alone basis. The two principal ones are NAT and MPLS.

Network address translation (NAT): To overcome the limitation in the number of available public addresses, NAT was implemented. This means that many users / computers in a private network are able to access the public Internet using a single public IP address. Each session is given a transient, dynamic mapping when it goes out to the Internet, and the NAT software manages the translation between the public IP address and the private addresses used within the internal network.
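Conceptually, the state a NAT device maintains is just a translation table; the toy sketch below (addresses and port numbers invented for illustration) shows why outbound connections work effortlessly while unsolicited inbound ones do not:

```python
import itertools

class Napt:
    """Toy port-translating NAT: many private endpoints share one public IPv4 address."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._ports = itertools.count(40000)   # pool of public source ports
        self.table = {}                        # (private_ip, private_port) -> public_port

    def outbound(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = next(self._ports)
        return self.public_ip, self.table[key]

    def inbound(self, public_port: int):
        # Reverse lookup only works while a mapping exists, which is exactly why
        # long-lived inbound relationships are hard to maintain behind NAT.
        for key, port in self.table.items():
            if port == public_port:
                return key
        return None

nat = Napt("203.0.113.5")
print(nat.outbound("192.168.1.10", 51234))   # ('203.0.113.5', 40000)
print(nat.inbound(40001))                    # None: nothing has been mapped to that port
```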

NAT effectively addressed the concern that the Internet might run out of address space. It could be argued that NAT is just a short-term solution that came at a big cost to users. The principal downside is that external parties are unable to set up long-term relationships with an individual user or computer behind a NAT wall, as it has not been assigned its own unique public IP address and the internally assigned, dynamic addresses can change at any time.

This particularly affects applications that contain addresses so that traffic can always be sent to a specific individual or computer – VoIP is probably the main victim.

It’s interesting to note that the capability to uniquely identify individual computers was the main principle behind the development of IPv4, so it is quite easy to see why strong views are often expressed about NAT!

MPLS and related QoS standards: The advent of MPLS covered in The rise and maturity of MPLS and MPLS and the limitations of the Internet addressed many of the needs of the IP community to be able to address Quality of Service issues by separating high-priority service traffic from low-priority traffic.

Round up

Don’t break what works. IP networks take a considerable amount of skill and hard work to keep alive. They always seem to be ‘living on the edge’ and break down the moment a network administrator gets distracted. ‘Leave well alone’ is the mantra of many operational groups.

The benefits of upgrading to IPv6 have been considerably eroded by the advent of NAT and MPLS. Combine this with the lack of an overall management body that could force through a universal upgrade, and the innate inertia of carriers and ISPs, and IPv6 will probably never achieve as dominant a position as its progenitor IPv4.

According to one overview of IPv6, which gets to the heart of the subject, “Although IPv6 is taking its sweet time to conquer the world, it’s now showing up in more and more places, so you may actually run into it one of these days.”

This is not to say that IPv6 is dead, rather it is being marginalised by only being run in closed networks (albeit some rather large networks). There is real benefit to the Internet being upgraded to IPv6 as every individual and every device connected to it could be assigned its own unique address as envisioned by the Founders of the Internet. The inability to do this severely constrains services and applications which are not able to clearly identify an individual on an on-going basis as is inherent in a telephone number. This clearly reflects badly on the Internet.

IPv6 is a victim of the success of the Internet and the ubiquity of IPv4 and will probably never replace IPv4 in the Internet in the foreseeable future (Maybe I should never say never!). I was once asked by a Cisco Fellow how IPv6 could be rolled out, after shrugging my shoulders and laughing I suggested that it needed a Bill Gates of the Internet to force through the change. That suggestion did not go down too well. Funnily enough, now that IPv6 is incorporated into Vista we could see the day when this happens. The only fly in the ointment is that Vista has the same problems and challenges as IPv6 in replacing XP – users are finally tiring of never-ending upgrades with little practical benefit.

Interesting times.


sip, Sip, SIP – Gulp!

May 22, 2007

Session Initiation Protocol, or ‘SIP’ as it is known, has become a major signalling protocol in the IP world as it lies at the heart of Voice-over-IP (VoIP). It’s a term you can hardly miss as it is supported by every vendor of phones on the planet (Picture credit: Avaya: An Avaya SIP phone).

Many open software groups have taken SIP to the heart of their initiatives and an example of this is IP Multimedia Subsystem (IMS) which I recently touched upon in IP Multimedia Subsystem or bust!

SIP is a real-time IP application-layer protocol that sits alongside HTTP, FTP, RTP and other well-known protocols used to move data through the Internet. However, it is an extremely important one because it enables SIP devices to discover, negotiate, connect and establish communication sessions with other SIP-enabled devices.

SIP was co-authored in 1996 by Jonathan Rosenberg, who is now a Cisco Fellow, Henning Schulzrinne, Professor and Chair of the Department of Computer Science at Columbia University, and Mark Handley, Professor of Networked Systems at UCL. Development then moved into an IETF SIP Working Group, which still maintains the RFC 3261 standard. SIP was originally used on the US experimental multicast network commonly known as the Mbone. This makes SIP an IT / IP standard rather than one developed by the communications industry.

Prior to SIP, voice signalling protocols such as SS7 (C7 in the UK) were essentially proprietary protocols aimed at the big telecommunications companies and their Public Switched Telephone Network (PSTN) voice networks. With the advent of the Internet and the ‘invention’ of Voice over IP, it soon became clear that a new signalling protocol was required – one that was peer-to-peer, scalable, open, extensible, lightweight and simple in operation, and that could be used on a whole new generation of real-time communications devices and services running over the Internet.

SIP itself is based on earlier IETF / Internet standards, principally the Hypertext Transfer Protocol (HTTP), which is the core protocol behind the World Wide Web.

Key features of SIP

The SIP signalling standard has many key features:

Communications device identification: SIP supports a concept known as the Address of Record (AOR), which represents a user’s unique address in the world of SIP communications. An example of an AOR is sip: xxx@yyy.com. To enable a user to have multiple communications devices or services, SIP has a mechanism called a Uniform Resource Identifier (URI). A URI is like the Uniform Resource Locator (URL) used to identify servers on the World Wide Web. URIs can be used to specify the destination device of a real-time session e.g.

  • IM: sip: xxx@yyy.com (Windows Messenger uses SIP)
  • Phone: sip: 1234 1234 1234@yyy.com; user=phone
  • FAX: sip: 1234 1234 1235@yyy.com; user=fax

A SIP URI can use both traditional PSTN numbering schemes AND alphabetic schemes as used on the Internet.
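As a rough illustration of how a URI breaks down into its parts, here is a hypothetical parser; a real implementation would follow the full RFC 3261 grammar rather than this simple string-splitting:

```python
from dataclasses import dataclass, field

@dataclass
class SipUri:
    user: str
    host: str
    params: dict = field(default_factory=dict)

def parse_sip_uri(uri: str) -> SipUri:
    """Sketch only: split a SIP URI into user, host and parameters."""
    scheme, _, rest = uri.partition(":")
    if scheme.strip().lower() != "sip":
        raise ValueError("not a sip: URI")
    userinfo, _, tail = rest.strip().partition("@")
    host, *raw_params = [part.strip() for part in tail.split(";")]
    params = {}
    for p in raw_params:
        key, _, value = p.partition("=")
        params[key] = value
    return SipUri(user=userinfo, host=host, params=params)

print(parse_sip_uri("sip: 1234 1234 1234@yyy.com; user=phone"))
# SipUri(user='1234 1234 1234', host='yyy.com', params={'user': 'phone'})
```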

Focussed function: SIP only manages the set-up and tear-down of real-time communication sessions; it does not manage the actual transport of media data. Other protocols undertake this task.

Presence support: SIP is used in a variety of applications but has found a strong home in applications such as VoIP and Instant Messaging (IM). What makes SIP interesting is that it is not only capable of setting up and tearing down real-time communications sessions but also supports and tracks a user’s availability through the Presence capability. (The open Jabber standard takes a different route to presence, based on XMPP rather than SIP.) I wrote about presence in – The magic of ‘presence’.

Presence is supported through a key SIP extension: SIP for Instant messaging and Presence Leveraging Extensions (SIMPLE) [a really contrived acronym!]. This allows a user to state their status as seen in most of the common IM systems. AOL Instant Messenger is shown in the picture on the left.

SIMPLE means that the concept of Presence can be used transparently on other communications devices such as mobile phones, SIP phones, email clients and PBX systems.

User preference: SIP user preference functionality enables a user to control how a call is handled in accordance to their preferences. For example:

  • Time of day: A user can take all calls during office hours but direct them to a voice mail box in the evenings.
  • Buddy lists: Give priority to certain individuals according to a status associated with each contact in an address book.
  • Multi-device management: Determine which device / service is used to respond to a call from particular individuals.

PSTN mapping: SIP can manage the translation or mapping of conventional PSTN numbers to SIP URIs and vice versa. This capability allows SIP sessions to transparently inter-work with the PSTN. Mechanisms such as ENUM (E.164 Number Mapping), together with its registry operators, provide the appropriate database capabilities. To quote ENUM’s home page:

“ENUM unifies traditional telephony and next-generation IP networks, and provides a critical framework for mapping and processing diverse network addresses. It transforms the telephone number—the most basic and commonly-used communications address—into a universal identifier that can be used across many different devices and applications (voice, fax, mobile, email, text messaging, location-based services and the Internet).”
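The mapping itself is mechanical: the digits of the E.164 number are reversed, separated by dots and suffixed with e164.arpa, and the resulting domain is then queried for NAPTR records pointing at SIP (or other) URIs. A quick sketch, using a made-up UK number:

```python
def enum_domain(e164_number: str, suffix: str = "e164.arpa") -> str:
    """Turn an E.164 telephone number into the domain an ENUM lookup queries."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(enum_domain("+44 20 7946 0123"))
# 3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa  ->  a DNS NAPTR lookup then returns e.g. a sip: URI
```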

SIP trunking: SIP trunks enable enterprises to group inter-site calls using a pure IP network. This could use an IP-VPN over an MPLS-based network with a guaranteed Quality of Service. Using SIP trunks could lead to significant cost saving when compared to using traditional E1 or T1 leased lines.

Inter-island communications: In a recent post, Islands of communication or isolation? I wrote about the challenges of communication between islands of standards or users. The adoption of SIP-based services could enable a degree of integration with other companies to extend the reach of what, to date, have been internal services.

Of course, the partner companies need to have adopted SIP as well and have appropriate security measures in place. This is where the challenge would lie in achieving this level of open communications! (Picture credit: Zultys: a Wi-Fi SIP phone)

SIP servers

SIP servers are the centralised capability that manages the establishment of communications sessions between users. Although there are many types of server, they are essentially just software processes and could all be run on a single processor or device. There are several types of SIP server:

Registrar Server: The registrar server authenticates and registers users as soon as they come on-line. It stores identities and the list of devices in use by each user.

Location Server: The location server keeps track of users’ locations as they roam and provides this data to other SIP servers as required.

Redirect Server: When users are roaming, the Redirect Server maps session requests to a server closer to the user or an alternate device.

Proxy Server: SIP proxy servers pass on SIP requests to other SIP servers located either downstream or upstream.

Presence Server: SIP presence servers enable users to publish their status (as ‘presentities’) to other users who would like to see it (‘watchers’).
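As a toy sketch of what the registrar holds, and what a proxy or redirect server later consults when deciding where to send a session request, consider the following (the class and the contact URIs are invented for illustration):

```python
from collections import defaultdict

class Registrar:
    """Toy SIP registrar: bindings from an address-of-record to device contacts."""
    def __init__(self):
        self.bindings = defaultdict(set)   # AOR -> set of contact URIs

    def register(self, aor: str, contact: str):
        self.bindings[aor].add(contact)

    def lookup(self, aor: str):
        # A proxy or redirect server consults this to forward or redirect an INVITE.
        return sorted(self.bindings.get(aor, ()))

reg = Registrar()
reg.register("sip:xxx@yyy.com", "sip:xxx@192.0.2.10:5060")       # desk phone
reg.register("sip:xxx@yyy.com", "sip:xxx@wifi-phone.yyy.com")    # Wi-Fi handset
print(reg.lookup("sip:xxx@yyy.com"))                             # both current bindings
```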

Call setup Flow

The diagram below shows the initiation of a call from the PSTN network (section A), connection (section B) and disconnect (section C). The flow is quite easy to understand. One of the downsides is that if a complex session is being set up it’s quite easy to get up to 40 to 50+ separate transactions which could lead to unacceptable set-up times being experienced – especially if the SIP session is being negotiated across the best-effort Internet.

(Picture source: NMS Communications)
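For a flavour of what actually travels across the wire during call set-up, here is a sketch that builds a minimal SIP INVITE, as a PSTN gateway might when a call arrives from the telephone network; many mandatory details (the SDP body, Max-Forwards, authentication and so on) are deliberately left out, and all the names are invented:

```python
def make_invite(caller: str, callee: str, call_id: str, branch: str) -> str:
    """Sketch of a minimal SIP INVITE request (far from complete)."""
    return "\r\n".join([
        f"INVITE {callee} SIP/2.0",
        f"Via: SIP/2.0/UDP gw.example.com;branch={branch}",
        f"From: <{caller}>;tag=12345",
        f"To: <{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Contact: <sip:gw.example.com:5060>",
        "Content-Length: 0",
        "", ""])

print(make_invite("sip:+441234567890@gw.example.com;user=phone",
                  "sip:xxx@yyy.com",
                  "abc123@gw.example.com",
                  "z9hG4bK776asdhds"))
```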

Round-up

As a standard, SIP has had a profound impact on our daily lives and sits comfortably alongside those other protocol acronyms that have fallen into the daily vernacular, such as IP, HTTP, WWW and TCP. Protocols that operate at the application level seem to be so much more relevant to our daily lives than those that are buried in the network, such as MPLS and ATM.

There is still much to achieve by building capability on top of SIP such as federated services and more importantly interoperability. Bodies working on interoperability are SIPcenter, SIP Forum, SIPfoundry, SIP’it and IETF’s SPEERMINT working group. More fundamental areas under evaluation are authentication and billing.

More in-depth information about SIP can be found at http://www.tech-invite.com, a portal devoted to SIP and surrounding technologies.

Next time you just buy a SIP Wi-Fi phone from your local shop, install it, find that it works first time AND saves you money, just think about all the work that has gone into creating this software wonder. Sometimes, standards and open software hit a home run. SIP is just that.

Addendum #1: Do you know your ENUM?


IP Multimedia Subsystem or bust!

May 10, 2007

I have never felt so uncomfortable about writing about a subject as I am now while contemplating IP Multimedia Subsystem (IMS). Why this should be I’m not quite sure.

Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:

“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”

This was an initiative to bring computers and software to the world of voice switches, enabling carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:

“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”

Now there was absolutely nothing wrong with that vision and the core technology was relatively straightforward (database look-up number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death, it just fizzled out. It didn’t really disappear as such, as most of the IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.

Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus Intelligent networks goes right back to the early 1990s and is still raging today – well at least simmering.

Put bluntly, carriers laudably want intelligence to be based in the network so they are able to provide, manage and control applications and derive revenue that will compensate for plummeting Plain Old Telephone Service (POTS) revenues, whereas most IT and Internet people do not share this vision because they believe it holds back service innovation, which generally comes from small companies. There is a certain amount of truth in this view, as there are clear examples of where this is happening today if we look at the fixed and mobile industries.

Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?

What is IP Multimedia Subsystem (IMS)?

IMS is an architectural framework or reference architecture – not a standard – that provides a common method for IP multiple-media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world – and the communications world come to that – a good part of this activity could be encompassed using the term middleware. Middleware is an interface (abstraction) layer that sits between the networks and the applications / services, providing a common Application Programming Interface (API).

The commercial justification of IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.

The technical vision of IMS is about delivering seamless services where customers are able to access any type of service, from any device they want to use, with single sign-on, with common contacts and fluidity between wire line and wireless services. IMS has ambitions about delivering:

  • Common user interfaces for any service
  • Open application server architecture to enable a ‘rich’ service set
  • Separate user data from services for cross service access
  • Standardised session control
  • Inherent service mobility
  • Network independence
  • Inter-working with legacy IN applications

One of the comments I came across on the Internet from a major telecomms equipment vendor was that IMS was about the “Need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far as innovative services such as those mentioned generally do not come out of the carrier world.

Traditionally, each application or service offered by carriers sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols and running in complete isolation from other services, each of which sits in its own silo. In many ways this reflects the same situation that provided the motivation to develop a common control plane for data services called GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.


Removal of service silos
Source: Business Communications Review, May 2006

As with GMPLS, most large equipment vendors are committed to IMS and supply IMS compliant products. As stated in the above article:

“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make “anything over IP and IP over everything” a reality.”

Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 90s – albeit being a trifle more complex. This is the view of Bob Bellman, in an article entitled From Softswitching To IMS: Are We There Yet? Many of the  core elements of a softswitch architecture are to be found in the IMS architecture including the separation of the control and data planes.

VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006

Another associated reference architecture that is aligned with IMS and is being popularly pushed by software and equipment vendors in the enterprise world is Service Oriented Architecture (SOA) an architecture that focuses on services as the core design principle.

IMS has been developed by an industry consortium and originated in the mobile world in an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP, with 3GPP2 and TISPAN extending it to CDMA and fixed networks. Nowadays, just about every standards body seems to be involved, including the Open Mobile Alliance, ANSI, ITU, IETF, the Parlay Group and the Liberty Alliance – fourteen in total.

Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (three, four and five letter acronyms), which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:

#1 Applications layer:

The application layer contains Application Servers (AS) which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). Like GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS) – or an HSS in a 3rd party network if the user is roaming (in GSM this is called the Home Location Register, HLR).

(Source: Lucent Technologies)

The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.

Breakout Gateways provide routing information based on telephone number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.

PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.

#2 Control layer:

The control plane hosts the HSS which is the master database of user identities and the individual calls or service sessions currently being used by each user. There are several roles that a SIP call / session controller can undertake:

  • P-CSCF (Proxy-CSCF): provides similar functionality to a proxy server in an intranet
  • S-CSCF (Serving-CSCF): the core SIP server, always located in the home network
  • I-CSCF (Interrogating-CSCF): a SIP server located at a network’s edge whose address can be found in DNS by 3rd party SIP servers.
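Purely as an illustration of how the three roles fit together (this is a gross simplification of the 3GPP procedures, and every name below is invented), a session request might hop through them like this:

```python
def route_invite(request_uri: str, home_domain: str, hss: dict) -> list:
    """Illustrative-only sketch of the CSCF hops for an incoming session request."""
    hops = ["P-CSCF"]                         # first point of contact in the access network
    hops.append(f"I-CSCF ({home_domain})")    # found via DNS at the home network's edge
    s_cscf = hss.get(request_uri)             # the HSS says which S-CSCF serves this user
    hops.append(f"S-CSCF ({s_cscf})")
    hops.append("Application Servers triggered by the user's service profile")
    return hops

hss = {"sip:alice@home.example": "scscf1.home.example"}
print(route_invite("sip:alice@home.example", "home.example", hss))
```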

#3 Transport layer:

IMS encompasses any service that uses IP / MPLS as transport, and pretty much all of the fixed and mobile access technologies including ADSL, cable modem DOCSIS, Ethernet, Wi-Fi, WiMAX and CDMA wireless. It has little choice in this matter: if IMS is to be used it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators – IMS is also focusing on the use of IPv6, with IPv4 ‘only’ being supported in the near term.

Roundup

IMS represents a tremendous amount of work spread over six years and uses as many existing standards as possible, such as SIP and Parlay. IMS is a work in progress and much still needs to be done – security and seamless inter-working of services are but two examples.

All the major telecommunications software, middleware and integration companies are involved, and just thinking about the scale of the task needed to put in place common control for a whole raft of services makes me wonder just how practical the implementation of IMS actually is. Don’t get me wrong, I am a real supporter of these initiatives because it is hard to come up with an alternative vision that makes sense, but boy am I glad that I’m not in charge of a carrier IMS project!

The upsides of using IMS in the long term are pretty clear and focus around lowering costs, quicker time to market, integration of services and, hopefully, single log-in.

It’s some of the downsides that particularly concern me:

  • Non-migration of existing services: Like we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure such as instant conferencing, messaging, gaming, personal information management, presence, location based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services in the short term onto a brand new infrastructure – especially when that infrastructure is based on a non-completed reference architecture?

    IMS is a long-term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future that can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever will.

  • Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This is a quote from Keith Nissen of In-Stat in a BCR article.
  • No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture is incomplete, services will continue to be rolled out without IMS further inflaming the Non-migration of existing services issue raised above.
  • Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused getting IMS integration sorted they should be working on externally focused services. Without these services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS, Intelligent Networks reborn to suffer the same fate?
  • OSS integration: Any IMS system will need to integrate with carrier’s often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
  • Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That role lies with small companies on the Internet creating Web 2.0 services that utilise such technologies as presence, VoIP and AJAX today. Will any of these companies care whether a carrier has an IMS infrastructure in place?
  • Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
  • Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.

I said at the beginning of the post that I felt uncomfortable about writing about IMS and now that I’m finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day and does it divert effort, money and limited resource away from where they should be applied – on creating interesting services and gaining market share. Only time will tell.

Addendum:  In a previous post, I wrote about the IETF’s Path Computation Element Working Group and it was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF) which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here starting at slide 10. Does RACF compete with PCE or could PCE be a part of RACF?


The tale of DOCSIS and cable operators

May 2, 2007

When anyone who uses the Internet on a regular basis is presented with an opportunity to upgrade their access speed, they will usually jump at it without a second thought. There used to be a similar effect with personal computer operating systems and processor speeds, but this is a less common trend these days as the benefits to be gained are often ephemeral, as we have recently seen with Microsoft’s Vista. (Picture: SWINOG)

However, the advertising headline for many ISPs still focuses on “XX Mbit/s for as little as YY Pounds/month”. Personally, in recent years, I have not seen too many benefits in increasing my Internet access speed because I see little improvement when browsing normal WWW sites: their performance is no longer bottlenecked by my access connection but rather by the performance of the servers. My motivation to get more bandwidth into my home is the need to have sufficient bandwidth – both upstream and downstream – to support my family’s need to use multiple video and audio services at the same time. Yes, we are as dysfunctional as everyone else, with computers in nearly every room of the house and everyone wanting to do their own video or interactive thing.

I recently posted an overview of my experience of Joost, the new ‘global’ television channel recently launched by Skype founders, Niklas Zennstrom and Janus Friis – Joost’s beta – first impressions and it’s interesting to note that as a peer-to-peer system it does require significant chunks of your access bandwidth as discussed in Joost: analysis of a bandwidth hog.

The author’s analysis shows that it “pulls around 700 kbps off the internet and onto your screen” and “sends a lot of that data on to other users – about 220 kbps upstream”. If Joost is a window on the future of IPTV on the Internet, then it should be of concern to the ISP and carrier communities, and it should also be of concern to each of us that uses it. 220 kbit/s is a good chunk of the 250 kbit/s upstream capability of ADSL-based broadband connections. If the upstream channel is clogged, response times on all services being accessed will be affected – even more so if several individuals are accessing Joost over a single broadband connection.
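The arithmetic is sobering. A rough sketch, using the ~220 kbit/s per-viewer upstream figure quoted above against a nominal 250 kbit/s ADSL upstream (both figures are as reported, not measurements of mine):

```python
def upstream_headroom(link_up_kbps: float = 250.0,
                      joost_up_kbps: float = 220.0,
                      viewers: int = 1) -> float:
    """Upstream capacity left over once the quoted per-viewer load is subtracted."""
    return link_up_kbps - viewers * joost_up_kbps

print(upstream_headroom(viewers=1))   # ~30 kbit/s left for everything else
print(upstream_headroom(viewers=2))   # negative: the upstream channel is saturated
```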

It’s these issues that make me want to upgrade my bandwidth and think about the technology that I could use to access the Internet. In this space there has been an on-going battle for many years between twisted copper pair ADSL or VDSL used by incumbent carriers and cable technology used by competitive cable companies such as Virgin Media to deliver Internet to your home.

Cable TV networks (CATV) have come a long way since the 60s when they were based on simple analogue video distribution over coaxial cable. These days they are capable of delivering multiple services and are highly interactive allowing in-band user control of content unlike satellite delivery that requires a PSTN based back-channel. The technical standard that enables these services is developed by CableLabs and is called Data Over Cable Service Interface Specification (DOCSIS). This defines the interface requirements for cable modems involved in high-speed data distribution over cable television system networks.

The graph below shows the split between ADSL and cable-based broadband subscribers (Source: Virgin Media), with cable trailing ADSL to a degree. The linked document provides an excellent overview of the UK broadband market in 2006 so I won’t comment further here.

A DOCSIS based broadband cable system is able to deliver a mixture of MPEG-based video content mixed with IP enabling the provision of a converged service as required in 21st century homes. Cable systems operate in a parallel universe, well not quite, but they do run a parallel spectrum enclosed within their cable network isolated from the open spectrum used by terrestrial broadcasters. This means that they are able to change standards when required without the need to consider other spectrum users as happens with broadcast services.

The diagram below shows how the spectrum is split between upstream and downstream data flows (Picture: SWINOG), with various standards specifying the modulation (QAM) schemes and bit rates. As is usual in these matters, there are differences between the US and European standards due to differing frequency allocations and television systems – NTSC in the USA and PAL in Europe. Data is usually limited to between 760 and 860 MHz.

The DOCSIS standard has been developed by CableLabs and the ITU with input from a multiplicity of companies. The customer premises equipment is called a cable modem and the Central Office (head end) equipment is called a cable modem termination system (CMTS).

Since 1997 there have been various releases (Source: CableLabs) of the DOCSIS standard, with the most recent, version 3.0, released in 2006.

DOCSIS 1.0 (Mar. 1997) (High Speed Internet Access) Downstream: 42.88 Mbit/s and Upstream: 10.24 Mbit/s

  • Modem price has declined from $300 in 1998 to <$30 in 2004

DOCSIS 1.1 (Apr. 1999) (Voice, Gaming, Streaming)

  • Interoperable and backwards-compatible with DOCSIS 1.0
  • “Quality of Service”
  • Service Security: CM authentication and secure software download
  • Operations tools for managing bandwidth service tiers

DOCSIS 2.0 (Dec. 2001) (Capacity for Symmetric Services) Downstream: 42.88 Mbit/s and Upstream: 30.72 Mbit/s

  • Interoperable and backwards compatible with DOCSIS 1.0 / 1.1
  • More upstream capacity for symmetrical service support
  • Improved robustness against interference (A-TDMA and S-CDMA)

DOCSIS 3.0 (Aug. 2006) Downstream: 160 Mbit/s and Upstream: 120 Mbit/s

  • Wideband services provided by expanding the usable bandwidth through channel bonding, i.e. instead of a data stream being delivered over a single channel, it is multiplexed over a number of channels – see the sketch after this list. (A previous post talked about bonding in the ADSL world: Sharedband: not enough bandwidth?)
  • Support of IPv6
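The bonding arithmetic is straightforward if framing overhead is ignored; the per-channel figure below assumes the ~42.88 Mbit/s downstream rate quoted for the earlier DOCSIS versions:

```python
def bonded_throughput(channels: int, per_channel_mbps: float) -> float:
    """Aggregate rate of a bonded channel group, ignoring protocol overhead."""
    return channels * per_channel_mbps

# Four bonded downstream channels of ~42.88 Mbit/s each
print(bonded_throughput(4, 42.88))   # ~171.5 Mbit/s raw, in the ballpark of the 160 Mbit/s headline
```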

Roundup

With the release of the DOCSIS 3.0 standard it looks like cable companies around the world are now set to upgrade the bandwidth they can offer their customers in coming years. However, this will be an expensive upgrade to undertake, with head-end equipment needing to be upgraded first, followed by field cable modem upgrades over time. I would hazard a guess that it will be at least five years before the average cable user sees the benefits.

I also wonder what price will need to be paid for the benefit of gaining higher bandwidth through channel bonding when there is limited spectrum available for data services on the cable system. A limit on subscriber-number scalability?

I was also interested to read about the possible adoption of IPv6 in DOCSIS 3.0. It was clear to me many years ago that IPv6 would ‘never’ (never say never!) be deployed across the Internet because of the scale of the task. Its best chance would be in closed systems such as satellite access services and IPTV systems. Maybe cable systems are another option. I will catch up on IPv6 in a future post.


Aria Networks shows the optimal path

April 26, 2007

In a couple of previous posts, Path Computation Element (PCE): IETF’s hidden jewel and MPLS-TE and network traffic engineering, I mentioned a company called Aria Networks who are working in the technology space discussed in those posts. I would like to take this opportunity to write a little about them.

Aria Networks is a small UK company that has been going for around eighteen months. The company is commercially led by Tony Fallows; the core technology has been developed by Dr Jay Perrett, Chief Science Officer and Head of R&D, and Daniel King, Chief Operating Officer; and their CTO is Adrian Farrel. Adrian currently co-chairs the IETF Common Control and Measurement Plane (CCAMP) working group that is responsible for GMPLS, and also co-chairs the IETF Path Computation Element (PCE) working group.

The team at Aria have brought some very innovative software technology to the products they supply to network operators and network equipment vendors. Their raison d’etre, as articulated by Daniel King, is “to fundamentally change the way complex, converged networks are designed, planned and operated”. This is an ambitious goal, so let’s take a look at how Aria plan to achieve this.

Aria currently supplies software that addresses the complex task of computing constraint-based paths across an IP or MPLS network and optimising that network holistically and in parallel. Holistic is a key word for understanding Aria’s products: it means that when an additional path needs to be computed in a network, the whole network and all the services running over it are recalculated and optimised in a single calculation. A simple example of why this is so important is shown here.

This ability to compute holistically rather than on a piecemeal basis requires some very slick software, as it is a computationally intensive (‘hard’) problem that could easily take many hours using other systems. Parallel is the other key word. When an additional link is added to a network there could be a knock-on effect on any other link in the network, so re-computing all the paths in parallel – both existing and new – is the only way to ensure a reliable and optimal result.

Traffic engineering of IP, MPLS or Ethernet networks could quite easily be dismissed by the non-technical management of a network operator as an arcane activity but, as anyone with experience of operating networks can vouch, good traffic engineering brings pronounced benefits that directly reduce costs while improving customers’ experience of using services. Of course, a lack of appropriate traffic engineering activity has the opposite effect. Only one thing could be put above traffic engineering in achieving a good brand image, and that is good customer service. The irony is that if money is not spent on good traffic engineering, ten times the amount will need to be spent on call centre facilities papering over the cracks!

One quite common view held by a number of engineers is that traffic engineering is not required because, they say, “we throw bandwidth at our network”. If a network has an abundance of bandwidth then in theory there will never be any delays caused by an inadvertent overload of a particular link. This may be true, but it is certainly an expensive and short-sighted solution, and one that could turn out to be risky as new customers come on board. Combine it with the often slow provisioning times associated with adding additional optical links and it can cause major network problems. The challenge of planning and optimising is significantly increased in a Next Generation Network (NGN), where traffic is actively segmented into different traffic classes such as real-time VoIP and best-effort Internet access. Traffic engineering tools will become even more indispensable than they have been in the past.

It’s interesting to note that even if a protocol like MPLS-TE, PBT, PBB-TE or T-MPLS has all the traffic-engineering bells and whistles any carrier may ever desire, it does not mean they can be used. Using TE extensions such as Fast ReRoute (FRR) needs sophisticated tools or they quickly become unmanageable in a real network.

Aria’s product family is called intelligent Virtual Network Topologies (iVNT). Current products are aimed at network operators that run IP and / or MPLS-TE based networks.

iVNT MPLS-TE enables network operators to design, model and optimise MPLS Traffic Engineered (MPLS-TE) networks that use constraint-based point-to-point Label Switched Paths (LSPs), constraint-based point-to-multipoint LSPs and Fast-Reroute (FRR) bypass tunnels. One of its real strengths is that it goes to town on supporting any type of constraint that could be placed on a link – delay, hop count, cost, required bandwidth, link-layer protection, path disjointedness, bi-directionality, etc. Indeed, it is quite straightforward to add any additional constraints that an individual carrier may need.
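Aria’s own engine is proprietary and AI-based (more on that below), so the following is emphatically not their method – just a generic toy illustration of what ‘constraint-based path computation’ means: prune the links that cannot carry the requested bandwidth, then search for the cheapest path that also stays within a delay bound:

```python
import heapq

def constrained_shortest_path(links, src, dst, need_bw, max_delay):
    """Toy constraint-based path search.

    links: dict mapping (node_a, node_b) -> {"cost": .., "bw": .., "delay": ..}
    """
    # Keep only links that satisfy the bandwidth constraint
    adj = {}
    for (a, b), m in links.items():
        if m["bw"] >= need_bw:
            adj.setdefault(a, []).append((b, m["cost"], m["delay"]))
            adj.setdefault(b, []).append((a, m["cost"], m["delay"]))

    # Cheapest-first search, rejecting paths whose accumulated delay exceeds the bound
    heap = [(0, 0, src, [src])]               # (cost, delay, node, path)
    best = {}
    while heap:
        cost, delay, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, delay, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, c, d in adj.get(node, []):
            if nxt not in path and delay + d <= max_delay:
                heapq.heappush(heap, (cost + c, delay + d, nxt, path + [nxt]))
    return None                               # no feasible path

links = {("A", "B"): {"cost": 1, "bw": 100, "delay": 5},
         ("B", "C"): {"cost": 1, "bw": 40,  "delay": 5},   # too little bandwidth: pruned
         ("A", "C"): {"cost": 3, "bw": 100, "delay": 4}}
print(constrained_shortest_path(links, "A", "C", need_bw=50, max_delay=10))
# (3, 4, ['A', 'C'])
```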

iVNT IP enables network operators to design, model and optimise IP and Label Distribution Protocol (LDP) networks based on the metrics used in the Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS) Interior Gateway Protocols (IGPs), to ensure traffic flows are correctly balanced across the network. Although forgoing more advanced traffic engineering capabilities is clearly not the way to go in the future, many carriers still stick with ‘simple’ IP solutions – which are far from simple in practice and can be an operational nightmare to manage.

What makes Aria software so interesting?

To cut to the chase, it’s the use of Artificial Intelligence (AI) applied to path computation and network optimisation. Conventional algorithms used in network optimisation software are linear in nature and usually deterministic, in that they produce the same answer for the same set of variables every time they are run. They are usually ‘tuned’ to a single service type and are often very slow to produce results when faced with a very large network that uses many paths and carries many services. Aria’s software may produce different, but equally correct, results each time it is run and is able to handle multiple services that are significantly different from a topology perspective, e.g. point-to-point (P2P) and point-to-multipoint (P2MP) based services, mesh-like IP-VPNs, etc.

Aria uses evolutionary and genetic techniques, which are good at learning new problems, and runs multiple algorithms in parallel. The software then selects which algorithm is better at solving the particular problem it is challenged with. The model evolves multiple times and quickly converges on the optimal solution. Importantly, the technology is very amenable to the use of parallel computing to speed up the processing of complex problems such as those found in holistic network optimisation.

It generally does not make sense to use the same algorithm to solve all the network path optimisation needs of different services – iVNT runs many in parallel and self-selection shows which is best suited to the current problem.

Aria’s core technology is called DANI (Distributed Artificial Neural Intelligence) and is a “flexible, stable, proven, scalable and distributed computation platform”. DANI was developed by two of Aria’s founders, Jay Perrett and Daniel King, and has had a long proving ground in the pharmaceutical industry for pre-clinical drug discovery, which needs the analysis of millions of individual pieces of data to isolate interesting combinations. The company that addresses the pharmaceutical industry is Applied Insilico.

Because of the use of AI, iVNT is able to compute a solution for a complex network containing thousands of different constraint-based links, hundreds of nodes and multiple services such as P2P LSPs, Fast Reroute (FRR) links, P2MP links (IPTV broadcast) and meshed IP-VPN services in just a few minutes on one of today’s notebooks.

What’s the future direction for Aria’s products?

Step to multi-layer path computation: As discussed in the posts mentioned above, Aria is very firmly supportive of the need to provide automatic multi-layer path computation. This means that the addition of a new customer’s IP service will be passed as a bandwidth demand to the MPLS network, and downwards to the GMPLS-controlled ASTN optical network as discussed in GMPLS and common control.

Path Computation Element (PCE): Aria are at the heart of the development of on-line path computation, so if this is a subject of interest to you then give Aria a call.

Two product variants address this opportunity:

iVNT Inside is aimed at Network Management System (NMS) vendors, Operational Support System (OSS) vendors and Path Computation Element (PCE) vendors that have a need to provide advanced path computation capabilities embedded in their products.

iVNT Element is for network equipment vendors that have a need to embed advanced path computation capabilities in their IP/MPLS routers or optical switches.

Roundup

Aria Networks could be considered a rare company in the world of start-ups. It has a well-tried technology whose inherent characteristics are admirably matched to the markets and the technical problems it is addressing, and its management team are actively involved in developing the standards that their products are, or will be, able to support. There could be no better basis for getting their products right.

It is early days for carriers turning in their droves to NGNs and it is even earlier days for them to adopt on-line PCE in their networks, but Aria’s timing is on the nose as most carriers are actively thinking about these issues and are actively looking for tools today.

Aria could be well positioned to benefit from the explosion of NGN convergence as it seems – to me at least – that fully converged networks will be very challenging to design, optimise and operate without the new approach and tools from companies such as Aria.

Note: I need to declare an interest as I worked with them for a short time in 2006.


PONs are anything but passive

April 18, 2007

Passive Optical Networks (PONs) are an enigma to me in many ways. On one hand, the concept goes back to the late 1980s and has been floating around ever since, with obligatory presentations from the large vendors whenever you visited them. Yes, for sure there are Pacific Rim countries and the odd state or incumbent carrier in the western world deploying the technology, but they never seemed to impact my part of the world.

On the other hand, the technology would provide virtual Internet nirvana for me at home, with 100 Mbit/s available to support video on demand for each member of my family who, in 21st century fashion, have computers in virtually every room of the house! This high bandwidth still seems as far away as ever, with an average downstream bandwidth of 4.5 Mbit/s in the UK; I see 7 Mbit/s as I am close to a BT exchange. We are still struggling to deliver 21st century data services over 19th century copper wires using ATM-based Digital Subscriber Line (DSL) technology. If you are in the right part of the country, you get marginally higher rates from your cable company. Why are carriers not installing optical fibre to every home?

To cut to the chase, it’s because of the immense cost of deploying it. Fibre to the Home (FTTH), as it is known, requires the installation of a completely new optical fibre infrastructure between the carrier’s exchanges and homes. Such an initiative would almost require a government-led and paid-for programme to make it worthwhile – which of course is what has happened in the Far East. Here in the UK this is further clouded by the existing cable industry, which has struggled to reach profitability based on massive investments in infrastructure during the 90s.

What are Passive Optical Networks (PONs)?

The key word is passive. In standard optical transmission equipment, used in the core of public voice and data networks, all of the data being transported is switched using electrical or optical switches. This means that investment needs to be made in the network equipment to undertake that switching, and that is expensive. In a PON, instead of electrical equipment joining or splitting optical fibres, fibres are just welded together at minimum cost – just like T-junctions in domestic plumbing. Light travelling down the fibre then splits or joins when it hits a splice. Equipment in the carrier’s exchange (or Central Office [CO]) and in the customer’s home then multiplexes or de-multiplexes an individual customer’s data stream.

Although the use of PONs considerably reduces equipment costs, as no switching equipment is required in the field and hence no electrical power feeds are needed, it is still an extremely expensive technology to deploy, making it very difficult to create a business case that stacks up. A major problem is that there is often no free space available in existing ducts, pushing carriers into new digs. Digging up roads and laying fibre is a costly activity. I’m not sure what the actual costs are these days, but £50 per metre dug used to be the figure many years ago.

As seems to be the norm in most areas of technology, there are two PON standards slugging it out in the market place with a raft of evangelists attached to both camps. The first is Gigabit PON (GPON) and the second is Ethernet PON (EPON).

    About Gigabit PON (GPON)

    The concept of PONs goes back to the early 1990s to the time when the carrier world was focussed on a vision of ATM being the world’s standard packet or cell based WAN and LAN transmission technology. This never really happened as I discussed in The demise of ATM but ATM lives on in other services defined around that time. Two examples are broadband Asynchronous DSL (ADSL) and the lesser known ATM Passive Optical Network (APON).

    APON was not widely deployed and was soon superseded by the next best thing – Broadband PON (BPON), also known as ITU-T G.983 as it was developed under the auspices of the ITU. More importantly, APON was limited in the number of data channels it could handle, and BPON added Wavelength Division Multiplexing (WDM) (covered in Technology Inside in Making SDH, DWDM and packet friendly). BPON uses one wavelength for 622Mbit/s downstream traffic and another for 155Mbit/s upstream traffic.

    If there are 32 subscribers on the system, that bandwidth is divided among the 32 subscribers, plus overhead. Upstream, a BPON system provides around 3 to 5Mbit/s per subscriber when fully loaded.
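    The arithmetic behind that figure is simple enough. A quick sketch, using the line rates above and an assumed 10% protocol overhead (the overhead fraction is my guess, purely for illustration):

        # Back-of-envelope BPON bandwidth share for a 32-way split.
        # Line rates come from the text; the overhead fraction is an
        # assumption purely for illustration.
        DOWNSTREAM_MBIT = 622
        UPSTREAM_MBIT = 155
        SUBSCRIBERS = 32
        OVERHEAD = 0.10  # assumed framing/protocol overhead

        down_share = DOWNSTREAM_MBIT * (1 - OVERHEAD) / SUBSCRIBERS
        up_share = UPSTREAM_MBIT * (1 - OVERHEAD) / SUBSCRIBERS
        print(f"~{down_share:.1f} Mbit/s down, ~{up_share:.1f} Mbit/s up per subscriber")
        # roughly 17.5 Mbit/s down and 4.4 Mbit/s up - consistent with the
        # 3 to 5 Mbit/s upstream figure quoted above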

    GPON is the latest upgrade from this stable; it uses SDH-like data framing and provides a data rate of 2.5Gbit/s downstream and 1.25Gbit/s upstream. The big technical difference is that GPON carries Ethernet and IP natively rather than ATM.

    It is likely that GPON will find its natural home in the USA and Europe. An example is Verizon, which is deploying 622Mbit/s BPON to its subscribers but is committed to upgrading to GPON within twelve months. In the UK, BT’s Openreach has selected GPON for a trial.

    About Ethernet PON (EPON)

    EPON comes from the IEEE stable and is standardised as IEEE 802.3ah. EPON is based on Ethernet standards and derives the benefits of using this commonly adopted technology. EPON uses only a single fibre between the subscriber split and the central office and does not require any powered equipment in the field, such as would be needed if kerb-side equipment were deployed. EPON also supports downstream Point to Multipoint (P2MP) broadcast, which is very important for broadcasting video. As with carrier-grade Ethernet standards such as PBB, some core Ethernet features such as CSMA/CD have been dropped in this new use of Ethernet. Instead, only one subscriber is able to transmit at any time, using a Time Division Multiple Access (TDMA) protocol.
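    To see why CSMA/CD is no longer needed, here is a toy sketch of the idea: upstream time is carved into grants and only the granted subscriber may transmit in each slot, so collisions simply cannot happen. The fixed 125µs round-robin slots below are my own simplification – the real standard grants bandwidth dynamically rather than using fixed slots.

        from itertools import cycle

        def tdma_grants(subscribers, slot_us=125, rounds=2):
            """Toy round-robin upstream scheduler: only the granted subscriber
            (ONU) may transmit during its slot, so there are no collisions."""
            order = cycle(subscribers)
            t = 0
            for _ in range(rounds * len(subscribers)):
                onu = next(order)
                yield (t, t + slot_us, onu)
                t += slot_us

        for start, end, onu in tdma_grants(["ONU-1", "ONU-2", "ONU-3"]):
            print(f"{start:4d}-{end:4d} us: {onu} transmits upstream")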

    A typical deployment is shown in the picture below: one fibre to the exchange connecting 32 subscribers.

    EPON architecture (Source: IEEE)

    A Metro Ethernet Forum overview of EPON can be found here.

    The Far East, especially Japan, has taken EPON to its heart, with the vast majority being installed by NTT, the major Japanese incumbent carrier, followed by Korea Telecom – hundreds of thousands of EPON connections in total.

    Roundup

    There is still a lot of development taking place in the world of PONs. On one hand, 10Gbit/s EPON is being talked about to give it an edge over 2.5Gbit/s GPON. On the other, WDM PONs are being trialled in the Far East, which would enable far higher bandwidths to be delivered to each home. WDM-PON systems allocate a separate wavelength to each subscriber, enabling the delivery of 100Mbit/s or more.

    Only this month it was announced that a Japanese MSO Moves 160 Mbit/s using advanced cable technology (the subject of a future TechnologyInside post).

    DSL based broadband suffers from a pretty major problem: the farther the subscriber is from their local exchange, the lower the data rate that can be supported reliably. PONs do not have this limitation (well, technically they do, but the reach is much greater). So in the race to increase data rates in the home, PONs are a clear-cut winner, along with cable technologies such as DOCSIS 3.0 used by cable operators.

    Personally, I do not expect PON deployment to pick up beyond its snail-like pace in Europe at any time in the near future. Expect to see the usual trials announced by the largest incumbent carriers such as BT, FT and DT, but don’t hold your breath waiting for it to arrive at your door. This has been questioned recently in a government report warning that the lack of high-speed internet access could jeopardise the UK’s growth in future years.

    You may think “so what – I’m happy with 2 to 7Mbit/s ADSL!”, but I can say with confidence that you should not be happy. The promise of IPTV services is really starting to be delivered at long last, and encoding bandwidths of 1 to 2Mbit/s really do not cut the mustard in the quality race. This is the case for standard definition, let alone high definition, TV. Moreover, with each family member having a computer and television in their own room, and each wanting to watch or listen to their own programmes simultaneously, low speed ADSL connections are far from adequate.
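    A quick sketch makes the point, using the 2Mbit/s stream rate mentioned above; the household mix and the line speed are assumptions chosen only for illustration:

        # Rough household demand versus a typical ADSL line. The stream rate is
        # the 2 Mbit/s figure from the text; the number of simultaneous viewers
        # and the line speed are assumptions for illustration.
        SD_STREAM_MBIT = 2.0    # one standard-definition IPTV stream
        VIEWERS = 4             # family members watching different programmes
        ADSL_DOWN_MBIT = 7.0    # a good ADSL line close to the exchange

        demand = SD_STREAM_MBIT * VIEWERS
        verdict = "fits" if demand <= ADSL_DOWN_MBIT else "does not fit"
        print(f"Demand {demand} Mbit/s vs line {ADSL_DOWN_MBIT} Mbit/s: {verdict}")

    Four standard definition streams already exceed a good ADSL line before anyone opens a web browser, and high definition only widens the gap.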

    One way out of this is to bond multiple DSL lines together to gain that extra bandwidth. I wrote a post a few weeks ago – Sharedband: not enough bandwidth? – about a company that provides software to do just this. The problem is that you would require an awful lot of telephone lines to get the 100Mbit/s that I really want! Maybe I should emigrate?

    Addendum #1: Economist Intelligence Unit: 2007 e-readiness rankings


    GMPLS and common control

    April 16, 2007

    From small beginnings MultiProtocol Label Switching (MPLS) has come a long way in ten years. Although there are a considerable number of detractors who believe it costly and challenging to manage, it has now been deployed in just about every carrier around the world in one guise or another (MPLS-TE), as discussed in The rise and maturity of MPLS. Moreover, it is now extending its reach down the stack into the optical transmission world through activities such as T-MPLS, covered in PBB-TE / PBT or will it be T-MPLS? (Picture: GMPLS: Architecture and Applications by Adrian Farrel and Igor Bryskin.)

    In the same way that early SDH standards did not encompass appropriate support for packet based services, as discussed in Making SDH, DWDM and packet friendly, the initial MPLS standards were firmly focussed on IP networks, not on optical wavelength or TDM switching.

    The promise of MPLS was to bring the benefits of a connection-oriented regime to the inherently connectionless world of IP networks and to send traffic along pre-determined paths, thus improving performance. This was key for the transmission of real-time or isochronous services such as VoIP over IP networks. Labels attached to packets enable the creation of Label Switched Paths (LSPs) which packets follow through the network. Just as importantly, it is possible to specify the quality of service (QoS) of an LSP, thus enabling the prioritisation of traffic based on importance.
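    The mechanics are easy to sketch. Each Label Switching Router (LSR) holds a table mapping an incoming label to a next hop and an outgoing label; the labels and node names below are invented purely for illustration:

        # Toy label-swapping model of an LSP. Each LSR maps an incoming label
        # to a (next hop, outgoing label) pair; the egress pops the label.
        LSR_TABLES = {
            "LSR-A": {17: ("LSR-B", 42)},
            "LSR-B": {42: ("LSR-C", 99)},
            "LSR-C": {99: ("egress", None)},  # label popped at the egress
        }

        def forward(ingress, label):
            node, path = ingress, [ingress]
            while label is not None:
                node, label = LSR_TABLES[node][label]
                path.append(node)
            return path

        print(" -> ".join(forward("LSR-A", 17)))  # LSR-A -> LSR-B -> LSR-C -> egress

    Because every packet carrying label 17 at LSR-A follows the same chain of table look-ups, the path – and the QoS treatment attached to it – is fixed in advance rather than decided hop by hop.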

    It was inevitable that MPLS would be extended so that it could be applied to the optical world, and this is where the IETF’s Generalised MPLS (GMPLS) standards come in. Several early packet and data transmission standards bundled together signalling and data planes in vertical ‘stove-pipes’, creating services that needed to be managed from top to bottom completely separately from each other.

    The main vision of GMPLS was to create a common control plane that could be used across multiple services and layers thus considerably simplifying network management by automating end-to-end provisioning of connections and centrally managing network resources. In essence GMPLS extends MPLS to cover packet, time, wavelength and fibre domains. A GMPLS control plane also lies at the heart of T-MPLS replacing older proprietary optical Operational Support Systems (OSS) supplied by optical equipment manufacturers. GMPLS provides all the capabilities of those older systems and more.

    GMPLS is also often referred to as the Automatically Switched Transport Network (ASTN), although strictly GMPLS is the control plane of an ASTN.

    GMPLS extends MPLS functionality by creating and provisioning:

    • Time Division Multiplexed (TDM) paths, where time slots are the labels (SONET / SDH).
    • Frequency Division Multiplexed (FDM) paths, where an optical frequency, such as seen in WDM systems, is the label.
    • Space Division Multiplexed (SDM) paths, where the label indicates the physical position of the data – for example, photonic cross-connects.

    Switching domain   Traffic type        Forwarding scheme   Example of device
    Packet, cell       IP, ATM             Label               IP router, ATM switch
    Time               TDM (SONET / SDH)   Time slot           Digital cross-connect
    Wavelength         Transparent         Lambda              DWDM system
    Physical space     Transparent         Fibre, line         Optical cross-connect (OXC)
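    Another way to read that table is that GMPLS simply generalises what a ‘label’ can be. A small sketch of that idea – the type and field names are mine, not the encodings defined in the GMPLS RFCs:

        from dataclasses import dataclass
        from typing import Union

        # One toy label type per row of the table above. Field names are
        # invented for illustration; the real encodings live in the GMPLS RFCs.

        @dataclass
        class PacketLabel:     # packet / cell domain: an ordinary MPLS label
            value: int

        @dataclass
        class TimeslotLabel:   # TDM domain: a SONET / SDH time slot
            timeslot: int

        @dataclass
        class LambdaLabel:     # wavelength domain: a DWDM wavelength
            wavelength_nm: float

        @dataclass
        class FibreLabel:      # space domain: a physical fibre or port
            port: int

        GeneralisedLabel = Union[PacketLabel, TimeslotLabel, LambdaLabel, FibreLabel]

        # One 'generalised label' per layer that a single end-to-end service
        # might touch on its way across a converged network.
        for label in (PacketLabel(42), TimeslotLabel(3), LambdaLabel(1550.12), FibreLabel(7)):
            print(label)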

    GMPLS applicability

    GMPLS has extended and enhanced the following aspects of MPLS:

    • Signalling protocols – RSVP-TE and CR-LDP.
    • Routing protocols – OSPF-TE and IS-IS-TE.

    GMPLS has also added:

    • Extensions to accommodate the needs of SONET / SDH and optical networks.
    • A new protocol, the Link Management Protocol (LMP), to manage and maintain the health of the control and data planes between two neighbouring nodes. LMP is an IP-based protocol used alongside the extended RSVP-TE and CR-LDP signalling.

    As GMPLS is used to control highly dissimilar networks operating at different levels in the stack, there are a number of issues it needs to handle in a transparent manner:

    • It does not just forward packets in routers, but needs to switch in the time, wavelength and physical port (space) domains as well.
    • It should work with all applicable switched networks – OTN, SONET / SDH, ATM, IP and so on.
    • There are still many switches that are not able to inspect traffic and thus not able to extract labels – this is especially true for TDM and optical networks.
    • It should facilitate the interoperation and integration of dissimilar networks.
    • Packet networks work at a much finer granularity than optical networks – it would not make sense to allocate a 622Mbit/s SDH link to a 1Mbit/s video IP stream by mistake (see the sketch after this list).
    • There is a significant difference in scale between IP and optical networks from a control perspective – optical networks being much larger, with thousands of wavelengths to manage.
    • There is often a much higher latency in setting up an LSP on an optical switch than there is on an IP router.
    • SDH and SONET systems can undertake a fast switch restoration in less than 50ms in the event of failure – a GMPLS control plane needs to handle this effectively.
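    On that granularity point, here is a toy admission sketch: before a fine-grained packet request is given a whole coarse TDM container of its own, check how well it would fill it and groom it with other traffic if the fill is too low. The container sizes are ballpark STM line rates and the 50% fill threshold is an assumption for illustration only:

        # Toy grooming check for the granularity mismatch described above.
        # Container sizes are ballpark STM-1/4/16/64 line rates; the minimum
        # fill threshold is an assumption purely for illustration.
        SDH_CONTAINERS_MBIT = [155, 622, 2488, 9953]
        MIN_FILL = 0.5

        def place_request(mbit_requested: float) -> str:
            for size in SDH_CONTAINERS_MBIT:
                if mbit_requested <= size:
                    fill = mbit_requested / size
                    if fill < MIN_FILL:
                        return (f"groom: {mbit_requested} Mbit/s would fill only "
                                f"{fill:.0%} of a {size} Mbit/s path")
                    return f"allocate a {size} Mbit/s path ({fill:.0%} filled)"
            return "request exceeds the largest container"

        print(place_request(1))    # groom with other traffic, don't waste a path
        print(place_request(400))  # allocate a 622 Mbit/s path (64% filled)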

    Round-up

    GMPLS / ASTN is now well entrenched in the optical telecommunications industry, with many, if not most, of the principal optical equipment manufacturers demonstrating compatible systems.

    It’s easy to see the motivation to create a common control plane (GMPLS was defined under the auspices of the IETF’s Common Control and Measurement Plane (CCAMP) working group), as it would considerably reduce the complexity and cost of managing fully converged Next Generation Networks (NGNs). Indeed, it is hard to see how any carrier could implement a real converged network without it.

    As discussed in Path Computation Element (PCE): IETF’s hidden jewel, converged NGNs will need to compute service paths across multiple networks and multiple domains, and automatically pass service provisioning at the IP layer down to optical networks such as SDH and ASTN. Again, it is hard to see how this vision can be implemented without a common control plane and GMPLS.

    To quote the concluding comment in GMPLS: The Promise of the Next-Generation Optical Control Plane (IEEE Communications Magazine, July 2005, Vol. 43 No. 7):

    “we note that far from being abandoned in a theoretical back alley, GMPLS is very much alive and well. Furthermore, GMPLS is experiencing massive interest from vendors and service providers where it is seen as the tool that will bring together disparate functions and networks to facilitate the construction of a unified high-function multilayer network operators will use as the foundation of their next-generation networks. Thus, while the emphasis has shifted away from the control of transparent optical networks over the last few years, the very generality of GMPLS and its applicability across a wide range of switching technologies has meant that GMPLS remains at the forefront of innovation within the Internet. “