The new network dogma: Has the wheel turned full circle?

August 26, 2008

“An authoritative principle, belief, or statement of ideas or opinion, especially one considered to be absolutely true”

When innovators proposed Internet Protocol (IP) as the universal protocol for carriers in the mid 90s, they met with furious resistance from the traditional telecommunications community. This post asks whether the wheel has now turned full circle with new innovative approaches often receiving the same reception.

Like many others, I have found the telecommunications industry so very interesting and stimulating over the last decade. There have been so many profound changes that it is hard to identify with the industry that existed prior to the new religion of IP that took hold in the late 90s. In those balmy days the industry was commercially and technically controlled by the robust world standards of the Public Switched Telephone Network (PSTN).

In some ways it was a gentleman’s industry where incumbent monopoly carriers ruled their own lands and had detailed inter-working agreements with other telcos to share the end-to-end revenue generated by each and every telephone call. To enable these agreements to work, the International Telecommunication Union (ITU) in Geneva spent decades defining the technical and commercial standards that greased the wheels. Life was relatively simple as there was only one standards body and one set of rules to abide by. The ITU is far from dead of course; the organisation remained central to the highly successful GSM standard for mobile telephony (though GSM itself was developed under CEPT and later ETSI) and is still very active defining standards to this very day.

In those pre-IP days, the industry was arguably at its zenith, with high revenues, similarly high profits and every company having its place in the universe. Technology had not significantly moved on for decades (though this does an injustice to the development of ATM and SDH/SONET) and there was quite a degree of complacency driven by a monopolistic mentality. Moreover, it was very much a closed industry in that individuals chose to spend their entire careers in telecommunications from a young age, with few outsiders migrating into it. Certainly few individuals with an information technology background joined telcos, as there was a significant mismatch in technology, skills and needs. It was not until the mid 90s, when the industry started to use computers by adopting Advanced Intelligent Networks (AIN) and Operations Support Systems (OSS), that computer-literate IT engineers and programmers saw new job opportunities and jumped aboard.

In many ways the industry was quite insular and had its own strong world view of where it was going. As someone once said, “the industry drank its own bathwater” and often chose to blinker out opposing views and changing reality. It is relatively easy to see how this came about with hindsight. How could an industry that was so insular embrace disruptive technology innovation with open arms? The management dogma was all about “We understand our business, our standards and our relationships. We are in complete control and things won’t change.”

Strong dogma dominated and was never more on show than in the debate about the adoption of Asynchronous Transfer Mode (ATM) standards that were needed to upgrade the industry’s switching networks. If ATM had been developed a decade earlier there would never have been an issue, but unfortunately the timing could not have been worse as it coincided with the major uptake of IP in enterprises. When I first wrote about ATM back in 1993, IP was pretty much an unknown protocol in Europe (The demise of ATM). ATM and the telco industry lost that battle and IP has never looked back.

In reality it was not so much a battle as all-out war. It was the telecommunications industry eyeball-to-eyeball with the IT industry. The old “we know best” dogma did not triumph and the abrupt change in industry direction led to severe trauma in all sections of the industry. Many old-style telecommunications equipment vendors, who had focused on ATM with gusto, failed to adapt, with many either writing off billions of dollars or being sold at knock-down valuations. Of course, many companies made a killing. Inside telcos, commercial and engineering management who had spent decades at the top of their profession found themselves floundering, and over a fifteen-year period a significant proportion of that generation of management ended up leaving the industry.

The IP bandwagon had started rolling and its unstoppable momentum has relentlessly driven the industry through to the current time. Interestingly, as I have covered in previous posts such as MPLS and the limitations of the Internet, not all the pre-IP technologies were dumped. This was particularly so with fundamental transmission-related network technologies such as SDH / SONET (SDH, the great survivor). These technologies were 100% defined within the telecommunications world and provided capabilities that were wholly lacking in IP. IP may have been perfect for enterprises, but many capabilities were missing that were required if it was to be used as the bedrock protocol in the telecommunications industry. Such things as:

  • Unlike telecommunications protocols, IP was proudly non-deterministic: packets would always find their way to the required destination even if the desired path failed. In the IP world this was seen as a positive feature. Undoubtedly it was, but it also meant that it was not possible to predict the time it would take for a packet to transit a network. Even worse, a contiguous stream of packets could arrive at a destination via different paths. This was acceptable for e-mail traffic but a killer for real-time services like voice.
  • Telecommunications networks required high reliability and resilience so that in event of any failure, automatic switchover to an alternate route would occur within several milliseconds so that even live telephone calls were not interrupted. In this situation IP would lackadaisically find another path to take and packets would eventually find their way to their destination (well maybe that is a bit of an overstatement, but it does provide a good image of how IP worked!).
  • Real time services require a very high Quality of Service (QoS) in that latency, delay, jitter and drop-out of packets need to be kept to an absolute minimum. This was, and is, a mandatory requirement for delivery of demanding voice services. IP in those days did not have the control signalling mechanisms to ensure this.
  • If PSTN voice networks had one dominant characteristic, it was reliability. Telephone networks just could not go down. They were well engineered and extensively monitored, so if any fault occurred comprehensive network management systems flagged it very quickly to enable operational staff to correct it or provide a workaround. IP networks just didn’t have this level of operational management capability.

These gaps in capabilities in the new IP-for-everything vision needed to be corrected pretty quickly, so a plethora of standards development was initiated through the IETF that remains in full flow to this day. I can still remember my amazement in the mid 1990s when I came across a company that had come up with the truly innovative idea of combining the deterministic ability of ATM with an IP router, bringing together the best of the old with the new, still under-powered, IP protocol (The phenomenon of Ipsilon). This was followed by Cisco’s and the IETF’s development of MPLS and all its progeny protocols (The rise and maturity of MPLS and GMPLS and common control).

Let’s be clear: without these enhancements to basic IP, all the benefits the telecommunications world gained from focusing on IP would not have been realised. The industry should breathe a huge sigh of relief, as many of the required enhancements were not developed until after the wholesale industry adoption of IP. If IP itself had not been sufficiently adaptable, it could be conjectured that there would have been one of the biggest industry dead ends imaginable and all the ‘Bellheads’ would have been yelling “I told you so!”.

Is this the end of story?

So, that’s it then, it’s all done. Every carrier of every description, incumbent, alternate, global, regional, mobile, and virtual has adopted IP / MPLS and everything is hunky-dory. We have the perfect set of network standards and everything works fine. The industry has a clear strategy to transport all services over IP and the Next Generation Network (NGN) architecture will last for several decades.

This may very well turn out to be the case; certainly IP / MPLS will be the mainstream technology set for a long time to come, and I still believe that this was one of the best decisions the industry took in recent times. However, I cannot help asking myself whether we have not gone back to many of the same closed industry attitudes that drove the industry prior to the all-pervasive adoption of IP.

It seems to me that it is now not the ‘done thing’ to propose alternative network approaches or enhancements that do not exactly coincide with the established IP way of doing things, for fear of being ‘flamed’. For me, the key issue driving network architectures should be simplicity, and nobody could use the term ‘simple’ when describing today’s IP carrier networks. Simplicity means less opportunity for service failure and simplicity means lower-cost operating regimes. In these days of ruthless management cost-cutting, any innovation that promises to simplify a network and thus reduce cost must have merit and should justify extensive evaluation – even if your favourite vendor disagrees. To put it simply, simplicity cannot come from deploying more and more complex protocols that micro-manage a network’s traffic.

Interestingly, in spite of the complete domination of public network cores by MPLS, there is still one major area where the use of MPLS is being actively questioned – edge and/or metro networks. There is currently quite a vibrant discussion taking place concerning the over-complexity of MPLS for use in metro networks and the possible benefits of using IP over Ethernet (Ethernet goes carrier grade with PBT / PBB-TE?). More on this later.

We should also not forget that telcos have never dropped other aspects of the pre-IP world. For example, the vast majority of telcos who own physical infrastructure still use that leading denizen of the pre-IP world, Synchronous Digital Hierarchy (SDH or SONET) (SDH, the great survivor). This friendly dinosaur of a technology still holds sway at the layer-1 network level even though most signalling and connectivity technologies that sit upon it have been brushed aside by the IP family of standards. SDH’s partner in crime, ATM, was absorbed by IP through the creation of standards that replicated its capabilities in MPLS (deterministic routing) and MPLS-TE (fast rerouting). The absorption of SDH into IP was not such a great success as many of the capabilities of SDH could not effectively be replaced by layer-3 capabilities (though not for the want of trying!).

SDH is based on time division multiplexing (TDM), the pre-packet method of sharing a defined amount of bandwidth between a number of services running over an individual wavelength on a fibre optic cable. The real benefit of this multiplexing methodology is that it has proved to be ultra-reliable and offers the very highest level of Quality of Service available. SDH also has the in-built ability par excellence to provide restoration of an inter-city optical cable in the case of major failure. One of SDH’s limitations, however, is that it only operates at a very coarse granularity of bandwidth, so smaller streams of traffic more appropriate to the needs of individuals and enterprises cannot be managed through SDH alone. This capability was provided by ATM and is now provided by MPLS.
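The coarseness of that granularity is visible from simple arithmetic on the nominal SDH line rates (a quick sketch, using the standard STM-N figures):

```python
# Nominal SDH line rates: STM-N runs at N x 155.52 Mbit/s.
STM1 = 155.52  # Mbit/s

def stm_rate(n):
    """Line rate of an STM-N signal in Mbit/s."""
    return n * STM1

for n in (1, 4, 16, 64):
    print(f"STM-{n}: {stm_rate(n):.2f} Mbit/s")

# The smallest container commonly sold to customers, VC-12, carries a
# 2.048 Mbit/s E1 -- so anything finer-grained than that cannot be
# groomed by SDH alone, which is the gap ATM and later MPLS filled.
```

Even the smallest SDH building blocks are sized for trunk traffic, not for the kilobit-scale flows of individual users, so a companion technology has always been needed at the service layer.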

Would a moment of reflection be beneficial?

The heresy that keeps popping up in my head when I think about IP and all of its progeny protocols, is that the telecommunications industry has spent fifteen years developing a highly complex and inter-dependent set of technical standards that were only needed to effectively replace what was a ‘simple’ standard that did its job effectively at a lower layer in the network. Indeed, pre MPLS, many of the global ISPs used ATM to provide deterministic management of the global IP networks.

Has the industry now created a highly over-engineered and over-complex reference architecture? Has a whole new generation of staff been so marinaded for a decade in deep IP knowledge, training and experience that it is hard for an individual to question technical strategy? Has the wheel turned full circle?

In my post Traffic Engineering, capacity planning and MPLS-TE, I wrote about some of the challenges facing the industry and the carriers’ need to undertake fine-grain traffic engineering to ensure that individual service streams are provided with appropriate QoS. As consumers start to use the Internet more and more for real-time isochronous services such as VoIP and video streaming, there is a major architectural concern about how this should be implemented. Do carriers really want to continue to deploy an ever increasing number of protocols that add to the complexity of live networks and hence increase risk?

It is surprising just how many carriers use only very light traffic engineering and simply rely on over-provisioning of bandwidth at a wavelength level. This may be considered expensive (but is it, if they own the infrastructure?) and architects may worry about how long they will be able to continue to use this straightforward approach, but there does seem to be a real reluctance to introduce fine-grained traffic management. I have been told several times that this is because they do not trust some of the new protocols and it would be too risky to implement them. It is common industry knowledge that a router’s operating system contains many features that are never enabled, and this is as true today as it was in the 90s.
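The over-provisioning rule of thumb amounts to a trivial calculation. The sketch below is illustrative only: the 50% utilisation comfort threshold and the 10G wavelength size are assumptions for the example, not industry standards.

```python
# Sketch: decide when to light another wavelength instead of deploying
# fine-grained traffic engineering. Threshold and sizes are illustrative.

def needs_more_capacity(peak_mbps, wavelengths, wavelength_gbps=10.0,
                        utilisation_limit=0.5):
    """True if peak traffic exceeds the comfort threshold of installed capacity."""
    capacity_mbps = wavelengths * wavelength_gbps * 1000
    return peak_mbps / capacity_mbps > utilisation_limit

# A link with two 10G wavelengths and a 12 Gbit/s peak is over the 50%
# comfort threshold, so the operator simply lights a third wavelength.
print(needs_more_capacity(12_000, wavelengths=2))  # True
print(needs_more_capacity(12_000, wavelengths=3))  # False
```

For a carrier that owns the fibre, the marginal cost of the extra wavelength is low, which is why this crude headroom check so often wins out over deploying MPLS-TE.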

It is clear that management of fine-grain traffic QoS is one of the top issues to be faced in coming years. However, I believe that many carriers have not adopted even the simplest of traffic engineering standards, MPLS-TE, which starts to address the issue. Is this because many see that adopting these standards could create a significant risk to their business, or is it simply fear, uncertainty and doubt (FUD)?

Are these some of the questions we in the carrier world should be asking ourselves?

Have management goals moved on since the creation of the early MPLS standards?

When first created, MPLS was clearly focused on providing deterministic forwarding at layer-3 so that the use of ATM switching could be dropped to reduce costs. This was clearly a very successful strategy, as MPLS now dominates the core of public networks. This idea was very much in line with David Isenberg’s ideas articulated in The Rise of the Stupid Network in 1997, which we were all so familiar with at the time. However, ambitions have moved on, as they do, and the IP vision was considerably expanded. The new ambition was to create a universal network infrastructure that could provide any service, using any protocol, that any customer was likely to need or buy. This was called an NGN.

However, is that still a good ambition to have? The focus these days is on aggressive cost reduction, and it makes sense to ask whether an NGN approach could ever actually reduce costs compared to what it would replace. For example, there are many carriers today who wish to focus exclusively on delivering layer-2 services. For these carriers, does it make sense to deliver those services across a layer-3-based network? Maybe not.

Are networks so ‘on the edge’ that they have to be managed every second of the day?

PSTN networks that pre-date IP were fundamentally designed to be reliable and resilient and pretty much ran without intervention once up and running. They could be trusted and were predictable in performance unless a major outside event occurred such as a spade cutting a cable.

IP networks, whether enterprise or carrier, have always had a well-earned image of instability and of going awry if left alone for a few hours. This has much to do with the nature of IP and the challenge of managing unpredicted traffic bursts. Even today, there are numerous times when a global IP network goes down due to an unpredicted event creating knock-on consequences. A workable analogy would be that operating an IP network is similar to a parent having to control an errant child suffering from Attention Deficit Disorder.

Much of this has probably been brought about by the unpredictable nature of routing protocols selecting forwarding paths. These protocols have been enhanced over the years with so many bells and whistles that a carrier’s perception of the best choice of data path across the network will probably not be the same as the one selected by the router itself.
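The gap between the planner’s expectation and the router’s choice can be seen even in a toy shortest-path computation (hypothetical four-node topology with invented link costs): two paths tie on cost, and an arbitrary tie-break, not the planner, decides which one carries the traffic.

```python
import heapq

# Toy topology: equal-cost paths A-B-D and A-C-D both cost 2, so the
# forwarding decision falls to a tie-break inside the router.
graph = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 1},
    "D": {"B": 1, "C": 1},
}

def shortest_cost(src, dst):
    """Dijkstra's algorithm: cost of the cheapest path from src to dst."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, weight in graph[node].items():
            if d + weight < dist.get(nbr, float("inf")):
                dist[nbr] = d + weight
                heapq.heappush(heap, (d + weight, nbr))
    return None

print(shortest_cost("A", "D"))  # 2 -- but via B or via C? The metric
# alone does not say, which is precisely the operator's headache.
```

Scale this ambiguity up to thousands of links, weighted metrics, and protocol-specific tie-break rules, and it becomes clear why the path a carrier expects and the path a router selects so often diverge.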

Do operational / planning architecture engineers often just want to “leave things as they are” because it’s working? Better the devil you know?

When a large IP network is running, there is a strong tendency to want to leave things well alone. Is this because there are so many inter-dependent functions in operation at any one time that it is beyond an individual to understand them all? Is it because when things go wrong it takes such an effort to restore service, and it is often impossible to isolate the root cause if it is not down to simple hardware failure?

Is risk minimisation actually the biggest deciding factor when deciding what technologies to adopt?

Most operational engineers running a live network want to keep things as simple as possible. They have to, because their job and sleep are on the line every day. Achieving this often means resisting the use of untried protocols (such as MPLS-TE) and replacing fine-grained traffic engineering with the much simpler strategy of over-provisioning (telcos see this as a no-brainer because they already own the fibre in the ground and it is relatively easy to light an additional dark wavelength).

At the end of the day, minimising commercial risk is near the top of everyone’s agenda, second only to operational cost reduction.

Compared to the old TDM networks they replace, are IP-based public networks getting too complex to manage when considering the ever increasing need for fine-grain service management at the edge of the network?

The spider’s web of protocols that need to perform flawlessly in unison to provide a good user experience is undoubtedly getting more and more complex as time goes by. There is little effort to simplify things, and there is a view that it is all becoming too over-engineered. Even if a new standard has been ratified and is recommended for use, this does not mean it will be implemented in live networks on a wide-scale basis. The protocol that heads the list of under-exploited protocols is IPv6 (IPv6 to the rescue – eh?).

There is significant on-going standards development activity in the space of path provisioning automation (Path Computation Element (PCE): IETF’s hidden jewel) and of true multilayer network management. This would include seamless control of layer-3 (IP), layer-2.5 (MPLS) and layer-1 (SDH) networks (GMPLS and common control). The big question is (at the risk of being called a Luddite): would a carrier in the near future risk deploying such complexity, which could bring down all layers of a network at once? Would the benefits outweigh the risk?

Are IP-based public networks more costly to run than legacy data networks such as Frame Relay?

This is a question I would really like to get an objective answer to as my current views are mostly based on empirical and anecdotal data. If anyone has access to definitive research, please contact me! I suspect, and I am comfortable with the opinion until proved wrong, that this is the case and could be due to the following factors:

  • More operations and support staff need to be permanently on duty than with the old TDM voice systems, leading to higher operational costs.
  • Operational staff require a higher level of technical skill and training caused by the complex nature of IP. CCIEs are expensive!
  • Equipment is expensive as the market is dominated by only a few suppliers and there are often proprietary aspects of new protocols that will only run on a particular vendor’s equipment thus creating effective supplier lock-in. The router clone market is alive and healthy!

It should be remembered that the most important reason given to justify the convergence on IP was the cost savings resulting from collapsing layers. This has not really taken place, except for the absorption of ATM into MPLS. Today, each layer is still planned, managed and monitored by separate systems. The principal goal of a Next Generation Network (NGN) architecture is still to achieve this magic result of reduced costs. Most carriers are still sitting on the fence waiting for evidence of this.

Is there a degradation in QoS using IP networks?

This has always been a thorny question to answer, and a ‘Google’ to find the answer does not seem to work. Of course, any answer lies in the eye of the beholder, as there is no clear definition of what the term QoS encompasses. In general, the term can be used at two different levels in relation to a network’s performance: micro-QoS and macro-QoS.

Micro-QoS is concerned with individual packet issues such as order of reception of packets, number of missing packets, latency, delay and jitter. An excessive amount of any of these will severely degrade a real-time service such as VoIP or video streaming. Macro-QoS is more concerned with network-wide issues such as network reliability and resilience, and other areas that could affect the overall performance and operational efficiency of a network.
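On the micro-QoS side, jitter is commonly estimated with the running filter defined in RFC 3550 (the RTP specification): each new delay variation nudges the estimate by one sixteenth. A minimal sketch, using invented timestamps for a 20 ms-spaced VoIP stream:

```python
# RFC 3550 interarrival jitter estimator: J += (|D| - J) / 16,
# where D is the change in (receive - send) transit time between
# consecutive packets.

def rtp_jitter(send_times, recv_times):
    """Running jitter estimate (same units as the timestamps)."""
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Invented sample: packets sent every 20 ms, network delay wobbling.
send = [0, 20, 40, 60, 80]
recv = [25, 46, 64, 90, 105]
print(round(rtp_jitter(send, recv), 3))
```

The 1/16 gain smooths out single spikes while still tracking sustained changes, which is why the same estimator appears in most VoIP monitoring tools.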

My perspective is that on a correctly managed IP / MPLS network (with all the hierarchy and management that requires), micro-QoS degradation is minimal and acceptable, and certainly no worse than IP over SDH. Indeed, many carriers deliver traditional private wire services such as E1 or T1 connectivity over an MPLS network using circuit-emulation pseudowires (the IETF’s PWE3 family of standards), with Virtual Private LAN Service (VPLS) playing the analogous role for multipoint Ethernet services. However, this does significantly raise the bar in respect of the level of IP network design and network management quality required.

The important issue is the possible degradation at the macro-QoS level, where I am comfortable with the view that an IP / MPLS network will always carry a statistically higher risk of faults or problems, due to its complexity, compared to a simpler IP over SDH system. There is a certain irony in that the macro-QoS performance of a network could be further degraded when additional protocols are deployed to improve micro-QoS performance.

Is there still opportunity for simplification?

In an MPLS dominated world, there is still significant opportunity for simplification and cost reduction.

Carrier Ethernet deployment

I have written several posts (Ethernet goes carrier grade with PBT / PBB-TE?) about carrier Ethernet standards and the benefits their adoption might bring to public networks – in particular, the promise of simplification. To a great extent this interesting technology is a prime example of where a new (well, newish) approach that actually does make quite a lot of sense comes up against the new MPLS-for-everything-and-everywhere dogma. It is not just a question of convincing service providers of the benefit, but also of overcoming the almost overwhelming pressure brought to bear on carrier management by MPLS vendors, who have clear vested interests in which technologies their customers choose to use. This often one-sided debate definitely harks back to the early 90s no-way-IP culture. Religion is back with a vengeance.

Metro networks

Let me quote Light Reading from September 2007: “What once looked like a walkover in the metro network sector has turned into a pitched battle – to the surprise, but not the delight, of those who saw Multiprotocol Label Switching (MPLS) as the clear and obvious choice for metro transport.” MPLS has encountered several road bumps on its way to domination, and it should always be appropriate to question whether any particular technology adoption is appropriate.

To quote the column further: “The carrier Ethernet camp contends that MPLS is too complex, too expensive, and too clunky for the metro environment.” Whether ‘thin MPLS’ (PBB-TE / PBT, or will it be T-MPLS?) will hold off the innovative PBB intruder remains to be seen. At the end of the day, the technology that provides simplicity and reduced operational costs will win the day.

Think the unthinkable

As discussed above, the original ambition of MPLS has ballooned over the years. Originally solving the challenge of how to provide a deterministic and flexible forwarding methodology for layer-3 IP packets and replace ATM, it achieved this objective exceptionally well. These days, however, it always seems to be assumed that some combination of Ethernet (PBB-TE) and/or MPLS-TE and maybe even GMPLS is the definitive, but highly complex, answer to creating that optimum, highly integrated NGN architecture that can be used to provide any service any customer might require.

Maybe it is worth considering a complementary approach that is highly focused on removing complexity. There is an interesting new field of innovation proposing that path forwarding ‘intelligence’ and path bandwidth management be moved from layer-3, layer-2.5 and layer-2 back into layer-1, where it rightly belongs. By adding additional capability to SDH, it is possible to reduce complexity in the layers above. In particular deployment scenarios this could have a number of major benefits, most of which result in significantly lower costs.

This raises an interesting point to ponder. While most revenue still derives from traditional telecom-oriented voice services, the services and applications that are really beginning to dominate and consume most bandwidth are real-time interactive and streaming services such as IPTV, TV replays, video shorts, video conferencing, tele-presence, live event broadcasting, tele-medicine, remote monitoring etc. It could be argued that all these point-to-point and broadcast services could be delivered with less cost and complexity using advanced SDH capabilities linked with Ethernet or IP / MPLS access. Is it worth thinking about bringing SDH back to the NGN strategic forefront, where it could deliver commercial and technical benefits?

To quote a colleague: “The datacom protocol stack of IP-over-Ethernet was designed for asynchronous file transfer, and Ethernet as a local area network packet-switching protocol, and these traditional datacom protocols do a fine job for those applications (i.e. for services that can tolerate uncertain delays, jitter and throughput, and/or limited-scope campus/LAN environments). IP-over-Ethernet was then assumed to become the basis protocol stack for NGNs in the early 2000s, due to the popularity of that basic datacom protocol stack for delivering the at-that-time prevailing services carried over Internet, which were mainly still file-transfer based non-real-time applications.”

SDH has really moved on since the days when it was only seen as a dumb transport layer. At least one company, Optimum Communications Services, offers an innovative vision whereby, instead of inter-node paths being static as is the case with the other NGN technologies discussed in this post, the network is able to dynamically determine the required inter-node bandwidth based on a fast real-time assessment of traffic demands between nodes.

So has the wheel turned full circle?

As most carriers’ architectural and commercial strategies are wholly focused on IP with the Yellow Brick Road ending with the sun rising over a fully converged NGN, how much real willingness is there to listen to possible alternate or complementary innovative ideas?

In many ways the telecommunications industry could be considered to have returned to the closed shutter mentality that dominated before IP took over in the late 1990s – I hope that this is not the case. There is no doubt that choosing to deploy IP / MPLS was a good decision, but a decision to deploy some of the derivative QoS and TE protocols is far from clear cut.

We need to keep our eyes and minds open, as innovation is alive and well and most often arises in small companies who are free to think the unthinkable. They may not always be right, but they may not be wrong either. Just cast your mind back to the high level of resistance encountered by IP in the 90s, and let’s not repeat that mistake. There is still much scope for innovation within the IP-based carrier network world, and I suspect this has everything to do with simplifying networks, not complicating them further.

Addendum #1: Optimum Communications Services – finally a way out of the zero-sum game?

IPv6 to the rescue – eh?

June 21, 2007

To me, IPv6 is one of the Internet’s real enigmas: the supposed replacement for the Internet’s ubiquitous IPv4. We all know this has not happened.

The Internet Protocol (IPv4) is the principal protocol that lies behind the Internet, and it originated before the Internet itself. In the late 1960s a number of US universities needed to exchange data, and there was an interest in developing the new network technologies, switching capabilities and protocols required to achieve this.

The result of this was the formation of the Advanced Research Projects Agency (ARPA), a US government body, which started developing a private network called ARPANET; the agency later metamorphosed into the Defense Advanced Research Projects Agency (DARPA). The initial contract to develop the network was won by Bolt, Beranek and Newman (BBN), which was eventually bought by Verizon and sold to two private equity companies in 2004, being renamed BBN Technologies.

The early services required by the university consortium were file transfer, email and the ability to remotely log onto university computers. The first version of the protocol was called the Network Control Protocol (NCP) and saw the light of day in 1971.

In 1973, Vint Cerf, who had worked on NCP (and is now Chief Internet Evangelist at Google), and Robert Kahn (who had previously worked on the Interface Message Processor [IMP]) kicked off a programme to design a next-generation networking protocol for the ARPANET. This activity resulted in the standardisation, through ARPANET Requests For Comments (RFCs), of TCP/IPv4 in 1981 (now IETF RFC 791).

IPv4 uses a 32-bit address structure which we see most commonly written in dot-decimal notation such as aaa.bbb.ccc.ddd representing a total of 4,294,967,296 unique addresses. Not all of these are available for public use as many addresses are reserved.
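The arithmetic behind that figure is straightforward. A quick sketch (the example address is chosen purely for illustration):

```python
# An IPv4 address is just a 32-bit integer, written as four octets.

def dotted_to_int(addr):
    """Convert aaa.bbb.ccc.ddd notation to its 32-bit integer value."""
    a, b, c, d = (int(octet) for octet in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(2 ** 32)                       # 4294967296 possible addresses
print(dotted_to_int("192.168.0.1"))  # 3232235521
```

Four octets of eight bits each give 2^32 combinations, which is exactly the 4,294,967,296 figure quoted above, before any reserved ranges are subtracted.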

An excellent book that pragmatically and engagingly goes through the origins of the Internet in much detail is Where Wizards Stay Up Late – it’s well worth a read.

The perceived need for upgrading

The whole aim of the development of IPv4 was to provide a schema to enable global computing by ensuring that computers could uniquely identify themselves through a common addressing scheme and communicate in a standardised way.

No matter how you look at it, IPv4 must be one of the most successful standardisation efforts ever, if measured by its ubiquity today. Just how many servers, routers, switches, computers, phones, and fridges are there that contain an IPv4 protocol stack? I’m not too sure, but it’s certainly a big, big number!

In the early 1990s, as the Internet really started ‘taking off’ outside of university networks, it was generally thought that the IPv4 specification was beginning to run out of steam and would not be able to cope with the scale of the Internet as the visionaries foresaw it. Although there were a number of deficiencies, the prime mover for a replacement to IPv4 was the view that the 32-bit address space was too restrictive and would completely run out within a few years. This was foreseen because it was envisioned, probably rightly, that nearly every future electronic device would need its own unique IP address, and if this came to fruition the addressing space of IPv4 would be woefully inadequate.

Thus the IPv6 standardisation project was born. IPv6 packaged together a number of IPv4 enhancements that would enable the IP protocol to be serviceable for the 21st century.

Work started in 1992/3 and by 1996 a number of RFCs had been released, starting with RFC 1883, the initial IPv6 specification (later obsoleted by RFC 2460). One of the most important RFCs to be released was RFC 1933, which specifically looked at the mechanisms for transitioning IPv4 networks to IPv6. This covered the ability of routers to run IPv4 and IPv6 stacks concurrently – “dual stack” – and the pragmatic ability to tunnel the IPv6 protocol over ‘legacy’ IPv4-based networks such as the Internet.

To quote RFC 1933:

This document specifies IPv4 compatibility mechanisms that can be implemented by IPv6 hosts and routers. These mechanisms include providing complete implementations of both versions of the Internet Protocol (IPv4 and IPv6), and tunnelling IPv6 packets over IPv4 routing infrastructures. They are designed to allow IPv6 nodes to maintain complete compatibility with IPv4, which should greatly simplify the deployment of IPv6 in the Internet, and facilitate the eventual transition of the entire Internet to IPv6.
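The tunnelling the RFC describes works by wrapping each IPv6 packet in an ordinary IPv4 header whose protocol field is set to 41 (IPv6-in-IPv4, later known as "6in4"). As a rough illustration only – a real implementation must also compute the header checksum and handle fragmentation – the shape of that encapsulating header can be sketched in Python:

```python
import struct

def ipv4_tunnel_header(src: bytes, dst: bytes, payload_len: int) -> bytes:
    """Build a minimal IPv4 header carrying IPv6 (protocol 41, '6in4').
    The checksum is left at zero for brevity; real stacks must compute it."""
    version_ihl = (4 << 4) | 5          # IPv4, 5 x 32-bit words = 20-byte header
    total_length = 20 + payload_len     # header plus encapsulated IPv6 packet
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,   # version/IHL, TOS, total length
        0, 0,                           # identification, flags/fragment offset
        64, 41, 0,                      # TTL, protocol 41 = IPv6-in-IPv4, checksum
        src, dst,
    )

hdr = ipv4_tunnel_header(bytes([192, 0, 2, 1]), bytes([198, 51, 100, 1]), 40)
print(len(hdr), hdr[9])  # 20 41 -> a 20-byte header with protocol field 41
```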

The IPv6 specification contained a number of areas of enhancement:

Address space: Back in the early 1990s there was a great deal of concern about the lack of availability of public IP addresses. With the widespread uptake of IP rather than ATM as the basis of enterprise private networks, as discussed in a previous post, The demise of ATM, most enterprises had gone ahead and implemented their networks with any old IP address they cared to use. This didn’t matter at the time because those networks were not connected to the public Internet, so it was of no consequence whether other computers or routers had selected the same addresses.

It first became a serious problem when two divisions of a company tried to interconnect within their private network and found that both divisions had selected the same default IP addresses and could not connect. This was further compounded when those companies wanted to connect to the Internet and found that their privately selected IP addresses could not be used in the public space as they had been allocated to other companies.

The answer to this problem was to increase the IP protocol addressing space to accommodate all the private networks coming onto the public network. Combined with the vision that every electronic device could contain an IP stack, IPv6 increased the address space to 128 bits rather than IPv4’s 32 bits.
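To put the two address spaces side by side, a couple of lines of Python (using the standard ipaddress module, which post-dates the period this post describes) make the scale of the jump clear:

```python
import ipaddress

ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(ipv4_space)                  # 4294967296
print(ipv6_space // ipv4_space)    # 79228162514264337593543950336 (2**96 times larger)

# The same stdlib module parses and compresses IPv6 addresses:
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)             # 2001:db8::1
```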

Headers: Headers in IPv4 (headers precede the data in the packet flow and contain routing and other information about it) were already becoming unwieldy, and the extra address data required by IPv6 would not have helped matters, expanding the minimum 20-byte header to 40 bytes. IPv6 headers are therefore simplified by enabling extension headers to be chained together and used only when needed: the IPv4 header has around a dozen fields plus options, while the IPv6 fixed header has just 8 and no options.
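The fixed part of the IPv6 header packs those 8 fields into exactly 40 bytes. A minimal Python sketch, with the traffic class and flow label left at zero for simplicity:

```python
import struct

def ipv6_header(payload_len: int, next_header: int,
                src: bytes, dst: bytes) -> bytes:
    """Pack the fixed 40-byte IPv6 header (8 fields, no options)."""
    version_tc_flow = 6 << 28           # version 6, traffic class 0, flow label 0
    return struct.pack(
        "!IHBB16s16s",
        version_tc_flow,
        payload_len,                    # payload length (excludes this header)
        next_header,                    # e.g. 6 = TCP, 17 = UDP
        64,                             # hop limit (replaces IPv4's TTL)
        src, dst,
    )

hdr = ipv6_header(0, 6, bytes(16), bytes(16))
print(len(hdr))  # 40 -> the fixed header is always 40 bytes
```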

Configuration: Managing an IP network is pretty much a manual exercise, with few tools to automate the activity beyond the likes of DHCP (the automatic allocation of IP addresses to computers). Network administrators seem to spend most of the day manually entering IP addresses into fields in network management interfaces, which really does not make good use of their skills.

IPv6 incorporates enhancements to enable a ‘fully automatic’ mode where the protocol can assign an address to itself without human intervention. The IPv6 protocol sends out a request to enquire whether any other device has the same address. If it receives a positive reply, it adds a random offset and asks again until it receives no reply. IPv6 can also identify nearby routers and automatically detect whether a local DHCP server is available.
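The autoconfiguration scheme that was eventually standardised derives the host part of the address from the interface’s MAC address (so-called modified EUI-64) before checking for duplicates. A toy Python illustration of that derivation:

```python
def link_local_from_mac(mac: str) -> str:
    """Derive an IPv6 link-local address from a MAC using modified EUI-64:
    flip the universal/local bit and insert ff:fe in the middle."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the U/L bit of the first octet
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(link_local_from_mac("00:11:22:33:44:55"))  # fe80::211:22ff:fe33:4455
```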

Quality of Service: IPv6 has embedded enhancements to enable the prioritisation of certain classes of traffic by assigning a value to a packet in the Traffic Class field (called Priority in early versions of the specification).

Security: IPv6 incorporates IPsec to provide authentication and encryption, improving the security of packet transmission; encryption is handled by the Encapsulating Security Payload (ESP).

Multicast: Multicast addresses are group addresses so that packets can be sent to a group rather than an individual. IPv4 handles this very inefficiently while IPv6 has implemented the concept of a multicast address into its core.

So why aren’t we all using IPv6?

The short answer to this question is that IPv4 is a victim of its own success. The task of migrating the Internet to IPv6, even taking into account the available migration options of dual-stack hosting and tunnelling, is just too challenging.

As we all know, the Internet is made up of thousands of independently managed networks, each looking to commercially thrive or often just to survive. There is no body overseeing how the Internet is run except for specific technical aspects such as Domain Name System (DNS) management or the standards body, the IETF. (Picture credit: The logo of the Linux IPv6 Development Project)

No matter how much individual evangelists push for the upgrade, getting the world to do so is pretty much an impossible task unless everyone sees that there is a distinct commercial and technical benefit for them to do so.

This is the core issue, and the benefits of upgrading to IPv6 have been seriously eroded by the advent of other standards efforts that address each of the IPv6 enhancements on a stand-alone basis. The two principal ones are NAT and MPLS.

Network address translation (NAT): To overcome the limitation in the number of available public addresses, NAT was implemented. This means that many users / computers in a private network are able to access the public Internet using a single public IP address. Each user is assigned a transient dynamic session IP address when they access the Internet, and the NAT software manages the translation between the public IP address and the dynamic address used within the private network.

NAT effectively addressed the concern that the Internet might run out of address space. It could be argued, though, that NAT is just a short-term solution that came at a big cost to users. The principal downside is that external parties are unable to set up long-term relationships with an individual user or computer behind a NAT wall, as it has not been assigned its own unique public IP address and the internal, dynamically assigned addresses can change at any time.
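The port-translation variant of NAT can be pictured as a lookup table mapping private (address, port) pairs onto ports of the single public address. A toy Python model (hypothetical addresses and port numbers, no timeouts or inbound handling):

```python
class Napt:
    """Toy NAPT: maps (private_ip, private_port) pairs onto distinct
    ports of a single public address. Illustrative only."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}     # (priv_ip, priv_port) -> public port

    def outbound(self, priv_ip: str, priv_port: int):
        key = (priv_ip, priv_port)
        if key not in self.table:       # allocate a fresh public port
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[key])

nat = Napt("203.0.113.7")
print(nat.outbound("10.0.0.5", 1234))   # ('203.0.113.7', 40000)
print(nat.outbound("10.0.0.9", 1234))   # ('203.0.113.7', 40001)
print(nat.outbound("10.0.0.5", 1234))   # ('203.0.113.7', 40000)  same mapping reused
```

The downside described above falls straight out of the model: an outside host sees only `203.0.113.7` and a transient port, so it has no stable way to address one particular internal machine.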

This particularly affects applications that contain addresses so that traffic can always be sent to a specific individual or computer – VoIP is probably the main victim.

It’s interesting to note that the capability to uniquely identify individual computers was the main principle behind the development of IPv4, so it is quite easy to see why strong views are often expressed about NAT!

MPLS and related QoS standards: The advent of MPLS covered in The rise and maturity of MPLS and MPLS and the limitations of the Internet addressed many of the needs of the IP community to be able to address Quality of Service issues by separating high-priority service traffic from low-priority traffic.

Round up

Don’t break what works. IP networks take a considerable amount of skill and hard work to keep alive. They always seem to be ‘living on the edge’, breaking down the moment a network administrator gets distracted. ‘Leave well alone’ is the mantra of many operational groups.

The benefits of upgrading to IPv6 have been considerably eroded by the advent of NAT and MPLS. Combine this with the lack of an overall management body who could force through a universal upgrade and the innate inertia of carriers and ISPs probably means that IPv6 will never achieve such a dominant position as its progenitor IPv4.

According to one overview of IPv6, which gets to the heart of the subject, “Although IPv6 is taking its sweet time to conquer the world, it’s now showing up in more and more places, so you may actually run into it one of these days.”

This is not to say that IPv6 is dead; rather it is being marginalised by only being run in closed networks (albeit some rather large ones). There would be real benefit in the Internet being upgraded to IPv6, as every individual and every device connected to it could be assigned its own unique address, as envisioned by the founders of the Internet. The inability to do this severely constrains services and applications that are unable to clearly identify an individual on an ongoing basis in the way that is inherent in a telephone number. This clearly reflects badly on the Internet.

IPv6 is a victim of the success of the Internet and the ubiquity of IPv4 and will probably never replace IPv4 in the Internet in the foreseeable future (Maybe I should never say never!). I was once asked by a Cisco Fellow how IPv6 could be rolled out, after shrugging my shoulders and laughing I suggested that it needed a Bill Gates of the Internet to force through the change. That suggestion did not go down too well. Funnily enough, now that IPv6 is incorporated into Vista we could see the day when this happens. The only fly in the ointment is that Vista has the same problems and challenges as IPv6 in replacing XP – users are finally tiring of never-ending upgrades with little practical benefit.

Interesting times.

The tale of DOCSIS and cable operators

May 2, 2007

When anyone who uses the Internet on a regular basis is presented with an opportunity to upgrade their access speed, they will usually jump at it without a second thought. There used to be a similar dynamic with personal computers, operating systems and processor speeds, but this is a less common trend these days as the benefits to be gained are often ephemeral, as we have recently seen with Microsoft’s Vista. (Picture: SWINOG)

However, the advertising headline for many ISPs still focuses on “XX Mbit/s for as little as YY Pounds/month”. Personally, in recent years, I have not seen too many benefits in increasing my Internet access speed, because I see little improvement when browsing normal WWW sites: their performance is no longer bottlenecked by my access connection but rather by the performance of the servers. My motivation to get more bandwidth into my home is the need to have sufficient bandwidth – both upstream and downstream – to support my family’s need to use multiple video and audio services at the same time. Yes, we are as dysfunctional as everyone else, with computers in nearly every room of the house and everyone wanting to do their own video or interactive thing.

I recently posted an overview of my experience of Joost, the new ‘global’ television channel recently launched by Skype founders, Niklas Zennstrom and Janus Friis – Joost’s beta – first impressions and it’s interesting to note that as a peer-to-peer system it does require significant chunks of your access bandwidth as discussed in Joost: analysis of a bandwidth hog.

The author’s analysis shows that it “pulls around 700 kbps off the internet and onto your screen” and “sends a lot of that data on to other users – about 220 kbps upstream”. If Joost is a window on the future of IPTV on the Internet, then it should be of concern to the ISP and carrier communities, and it should also be of concern to each of us that uses it. 220 kbit/s is a good chunk of the 250 kbit/s upstream capability of ADSL-based broadband connections. If the upstream channel is clogged, response times on all services being accessed will be affected – even more so if several individuals are accessing Joost over a single broadband connection.
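The arithmetic behind that concern is simple enough to spell out, using the figures quoted above:

```python
upstream_kbps = 250          # typical ADSL upstream capacity at the time
joost_up_kbps = 220          # upstream figure quoted in the analysis above

print(round(joost_up_kbps / upstream_kbps * 100))  # 88 -> % of upstream consumed
print(upstream_kbps - joost_up_kbps)               # 30 kbit/s left for everything else
```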

It’s these issues that make me want to upgrade my bandwidth and think about the technology that I could use to access the Internet. In this space there has been an on-going battle for many years between twisted copper pair ADSL or VDSL used by incumbent carriers and cable technology used by competitive cable companies such as Virgin Media to deliver Internet to your home.

Cable TV networks (CATV) have come a long way since the 60s when they were based on simple analogue video distribution over coaxial cable. These days they are capable of delivering multiple services and are highly interactive allowing in-band user control of content unlike satellite delivery that requires a PSTN based back-channel. The technical standard that enables these services is developed by CableLabs and is called Data Over Cable Service Interface Specification (DOCSIS). This defines the interface requirements for cable modems involved in high-speed data distribution over cable television system networks.

The graph below shows the split between ADSL- and cable-based broadband subscribers (Source: Virgin Media), with cable trailing ADSL to a degree. The link provides an excellent overview of the UK broadband market in 2006, so I won’t comment further here.

A DOCSIS-based broadband cable system is able to deliver a mixture of MPEG-based video content and IP, enabling the provision of the converged services required in 21st-century homes. Cable systems operate in a parallel universe – well, not quite, but they do run a parallel spectrum enclosed within their cable network, isolated from the open spectrum used by terrestrial broadcasters. This means that they are able to change standards when required without the need to consider other spectrum users, as happens with broadcast services.

The diagram below shows how the spectrum is split between upstream and downstream data flows (Picture: SWINOG) and various standards specify the data modulation (QAM) and bit-rate standards. As is usual in these matters, there are differences between the USA and European standards due to differing frequency allocations and standards – NTSC in the USA and PAL in Europe. Data is usually limited to between 760 and 860MHz.

The DOCSIS standard has been developed by CableLabs and the ITU with input from a multiplicity of companies. The customer premises equipment is called a cable modem and the Central Office (head end) equipment is called a cable modem termination system (CMTS).

Since 1997 there have been various releases (Source: CableLabs) of the DOCSIS standard, with the most recent, version 3.0, released in 2006.

DOCSIS 1.0 (Mar. 1997) (High Speed Internet Access) Downstream: 42.88 Mbit/s and Upstream: 10.24 Mbit/s

  • Modem price has declined from $300 in 1998 to <$30 in 2004

DOCSIS 1.1 (Apr. 1999) (Voice, Gaming, Streaming)

  • Interoperable and backwards-compatible with DOCSIS 1.0
  • “Quality of Service”
  • Service Security: CM authentication and secure software download
  • Operations tools for managing bandwidth service tiers

DOCSIS 2.0 (Dec. 2001) (Capacity for Symmetric Services) Downstream: 42.88 Mbit/s and Upstream: 30.72 Mbit/s

  • Interoperable and backwards-compatible with DOCSIS 1.0 / 1.1
  • More upstream capacity for symmetrical service support
  • Improved robustness against interference (A-TDMA and S-CDMA)

DOCSIS 3.0 (Aug. 2006) Downstream: 160 Mbit/s and Upstream: 120 Mbit/s

  • Wideband services provided by expanding the usable bandwidth through channel bonding, i.e. instead of data being delivered over a single channel, it is multiplexed over a number of channels. (A previous post talked about bonding in the ADSL world: Sharedband: not enough bandwidth?)
  • Support of IPv6
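The raw rates in the list above follow directly from the channel parameters. Assuming the US DOCSIS downstream figures (a 6 MHz channel at roughly 5.36 Msym/s with 256-QAM), the per-channel and bonded rates work out as follows:

```python
# US DOCSIS downstream: 6 MHz channel, 256-QAM, ~5.36 Msym/s (assumed figures)
symbol_rate_msym = 5.36
bits_per_symbol = 8              # 256-QAM carries 8 bits per symbol (2**8 = 256)

per_channel_mbps = symbol_rate_msym * bits_per_symbol
print(per_channel_mbps)          # 42.88 -> matches the raw rate listed above

# DOCSIS 3.0 channel bonding: e.g. four bonded downstream channels
print(4 * per_channel_mbps)      # 171.52 raw; ~160 Mbit/s usable after overhead
```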


With the release of the DOCSIS 3.0 standard it looks like cable companies around the world are now set to upgrade the bandwidth they can offer their customers in the coming years. However, this will be an expensive upgrade to undertake, with head-end equipment needing to be upgraded first, followed by field cable modem upgrades over time. I would hazard a guess that it will be at least five years before the average cable user sees the benefits.

I also wonder what price will need to be paid for the benefit of gaining higher bandwidth through channel bonding when there is limited spectrum available for data services on the cable system. A limit on subscriber-number scalability?

I was also interested to read about the possible adoption of IPv6 in DOCSIS 3.0. It was clear to me many years ago that IPv6 would ‘never’ (never say never!) take off on the Internet because of the scale of the task. Its best chance would be in closed systems such as satellite access services and IPTV systems. Maybe cable systems are another option. I will catch up on IPv6 in a future post.


March 26, 2007

And you thought Ethernet was simple! It seems I am following a little bit of an Ethernet theme at the moment, so I thought that I would have a go at listing all (many?) of the ways Ethernet packets can be moved from one location to another. Personally I’ve always found this confusing, as there seems to be a plethora of acronyms and standards. I will not cover wireless standards in this post.

Like IP (Internet Protocol, not Intellectual Property!), the characteristics of an Ethernet connection are only as good as the bearer service it is being carried over, and thus most of the standards are concerned with that aspect. Of course, IP is most often carried over Ethernet, so the performance characteristics of the Ethernet data path bleed through to IP as well. Aspects such as service resilience and Quality of Service (QoS) are particularly important.

Here are the ways that I have come across to transport Ethernet.

Native Ethernet

Native Ethernet in its original definition runs over twisted-pair, coaxial cable or fibre (even though Metcalfe called their cables The Ether). A core feature called carrier sense multiple access with collision detection (CSMA/CD) enabled multiple computers to share the same transmission medium. Essentially this works by a node resending a packet when it did not arrive at its destination because it collided with a packet sent from another node at the same time. This is one of the principal features of native Ethernet that is dropped when it is used on a wide-area basis, as it is not needed there.

Virtual LANs (VLANs): An additional capability for Ethernet was defined by the IEEE 802.1Q standard to enable multiple Ethernet segments in an enterprise to be bridged or interconnected, sharing the same physical coaxial cable or fibre while keeping each VLAN private. VLANs are focused on a single administrative domain where all equipment configurations are planned and managed by a single entity. What is known as Q-in-Q (VLAN stacking) emerged as the de facto technique for preserving customer VLAN settings and providing transparency across a provider network.

IEEE 802.1ad (Provider Bridges) is an amendment to the IEEE 802.1Q-1998 standard that adds the definition of Ethernet frames with multiple VLAN tags.
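A Q-in-Q frame simply carries two VLAN tags back to back: an outer service tag (TPID 0x88A8 under 802.1ad) added by the provider, and the customer’s original inner tag (TPID 0x8100). A Python sketch of the resulting header layout, with the priority bits left at zero for brevity:

```python
import struct

def qinq_header(dst: bytes, src: bytes,
                s_vid: int, c_vid: int, ethertype: int) -> bytes:
    """Build an Ethernet header with 802.1ad Q-in-Q double tagging:
    outer S-tag (TPID 0x88A8, provider VLAN) then inner C-tag (TPID 0x8100)."""
    return (
        dst + src
        + struct.pack("!HH", 0x88A8, s_vid & 0x0FFF)   # service (provider) tag
        + struct.pack("!HH", 0x8100, c_vid & 0x0FFF)   # customer tag, preserved
        + struct.pack("!H", ethertype)                 # original payload type
    )

hdr = qinq_header(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                  s_vid=100, c_vid=42, ethertype=0x0800)
print(len(hdr))  # 22 -> the 14-byte Ethernet header plus two 4-byte tags
```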

Ethernet in the First Mile (EFM): In June 2004, the IEEE approved a formal specification developed by its IEEE 802.3ah task force. EFM focuses on standardising a number of aspects that help Ethernet from a network-access perspective. In particular, it aims to provide a single global standard enabling complete interoperability of services. The standards activity encompasses: EFM over fibre, EFM over copper, EFM over passive optical network, and Ethernet in the First Mile Operation, Administration, and Maintenance. Combined with whatever technology a carrier deploys to carry Ethernet over its core network, EFM enables full end-to-end wide-area Ethernet services to be offered.

Over Dense Wave Division Multiplex (DWDM) optical networks

10GbE: The 10 Gbit/s Ethernet standard (IEEE 802.3ae) was published in 2002 and offers full-duplex capability by dropping CSMA/CD. 10GbE can be delivered over a carrier’s DWDM optical network.

Over SONET / SDH

Ethernet over SONET / SDH (EoS): For those carriers that have deployed SONET / SDH networks to support their traditional voice and TDM data services, EoS is a natural service to offer following a keep-it-simple approach, as it does not involve tunnelling as would be needed using IP/MPLS as the transmission medium. Ethernet frames are encapsulated into SDH Virtual Containers. This technology is often preferred by customers as it does not involve the transmission of Ethernet via encapsulation over an IP or MPLS shared network, which is often seen as a performance or security risk by enterprises (I always see this as an illogical concern, as ALL public networks use shared infrastructure at every level).

Link Access Procedure – SDH (LAPS): LAPS, a variant of the original LAP protocol, is an encapsulation scheme for Ethernet over SONET/SDH. It provides a point-to-point connectionless service over SONET/SDH and enables the encapsulation of IP and Ethernet data.

Over IP and MPLS:

Layer 2 Tunnelling Protocol (L2TP): L2TP was originally standardised in 1999, but an updated version, L2TPv3, was published in 2005. L2TP is a Layer 2 data-link protocol that enables data-link protocols to be carried on IP networks alongside PPP. This includes Ethernet, frame relay and ATM. L2TPv3 is essentially a point-to-point tunnelling protocol that is used to interconnect single-domain enterprise sites.

L2TPv3 is also known as a Virtual Private Wire Service (VPWS) and is aimed at native IP networks. As it is a pseudowire technology, it is grouped with Any Transport over MPLS (AToM).

Layer 2 MPLS VPN (L2VPN): Customers’ networks are separated from each other on a shared MPLS network using the MPLS Label Distribution Protocol (LDP) to set up point-to-point pseudowire Ethernet links. The picture below shows individual customer sites that are relatively near to each other connected by L2TPv3 or L2VPN tunnelling technology based on MPLS Label Switched Paths.

Virtual Private LAN Service (VPLS): A VPLS is a method of providing a fully meshed multipoint wide-area Ethernet service using pseudowire tunnelling technology. VPLS is a Virtual Private Network (VPN) that enables all the LANs on a customer’s premises connected to it to communicate with each other. A new carrier that has invested in an MPLS network rather than an SDH / SONET core network would use VPLS to offer Ethernet VPNs to its customers. The picture below shows a VPLS with an LSP link containing multiple MPLS pseudowire tunnels.

MEF: The MEF defines several types of Virtual Private Wire Service (VPWS):

Ethernet Private Line (EPL): An EPL service supports a single Ethernet Virtual Connection (EVC) between two customer sites.

Ethernet Virtual Private Line (EVPL): An EVPL service supports multiple EVCs between two customer sites.

Virtual Private Line Service (VPLS), or Ethernet LAN (E-LAN) service: supports multiple EVCs between multiple customer sites.

These MEF-created service definitions, which are not standards as such (indeed they are independent of standards), enable equipment vendors and service providers to achieve third-party certification for their products.

Looking forward:

100GbE: In 2006, the IEEE’s Higher Speed Study Group (HSSG), tasked with exploring what Ethernet’s next speed might be, voted to pursue 100G Ethernet over other options, such as 40 Gbit/s Ethernet, to be delivered in the 2009/10 time frame. The IEEE will work to standardise 100G Ethernet over distances as far as 6 miles over single-mode fibre optic cabling and 328 feet over multimode fibre.

PBT or PBB-TE: PBT is a group of enhancements to Ethernet that are defined in the IEEE’s Provider Backbone Bridging Traffic Engineering (PBB-TE) group. I’ve covered this in Ethernet goes carrier grade with PBT / PBB-TE?

T-MPLS: T-MPLS is a recent derivative of MPLS – I have covered this in PBB-TE / PBT or will it be T-MPLS?

Well, I hope I’ve covered most of the Ethernet wide-area transmission standards activities here. If I haven’t, I’ll add others as addendums. At least they are all on one page!

Video history of Ethernet by Bob Metcalfe, Inventor of Ethernet

March 22, 2007

The History of Ethernet

The Evolution of Ethernet to a Carrier Class Technology

The story of Ethernet goes back some 32 years to May 22, 1973. We had early Internet access. We wanted all of our PC’s to be able to access the Internet at high speed. So we came up with the Ethernet…

Ethernet goes carrier grade with PBT / PBB-TE (PBBTE)?

March 13, 2007

Alongside IP, Ethernet was always one of the ‘chosen’ protocols as it dominated enterprise Local Area Networks (LANs). I didn’t actually write about Ethernet back in the early 1990s because, at the time, it did not have a role in the Wide Area Network (WAN) space as a public data service protocol. As I mentioned in my post – The demise of ATM – it was assumed by many in the mid-1990s that Ethernet had reached the end of its life (NOT by users of Ethernet, I might add!) and that ATM was the strategic direction for LAN protocols. This was so much taken for granted by the equipment vendor industry that vendors spent vast amounts of money building ATM divisions to supply this perceived burgeoning market. But this was just not to be (Picture credit: Nortel).

The principal reason for this was that the ATM activists failed to understand that attempting to displace a well-understood and trusted technology with a new and unknown variety was a challenge too far. Moreover, ATM was so different that it would have required a complete replacement of not only 100% of LAN-related network equipment but much of the personal computer hardware and software as well.

What was ATM supposed to bring to the LAN community? A promise of complete compatibility between LANs and WANs, and an increase in the speed of IEEE 802.3 10 Mbit/s LAN backbones that was so desperately needed at the time.

ATM and Ethernet battled it out in the LAN public arena for only a short time, as it was such a one-sided battle. With the arrival of the 100 Mbit/s 100BASE-T Ethernet standard, the Keep It Simple, Stupid (KISS) approach gained dominance in the minds of users yet again. Why would you throw out years of investment for a promise of ATM jam tomorrow? Upgrading Ethernet LAN backbones to 100 Mbit/s was simple, as it only necessitated swapping out LAN interface cards and upgrading to better cabling. Most importantly, there was no major requirement to update network software or applications. It was so much of a no-brainer that it is now hard to see why ATM was ever thought to be the path forward.

This enigma lives on today, and it is caused by the difference in views between the private and the public network industries. ATM came out of the public WAN world, whereas IP and Ethernet came out of the private LAN world. Chalk and cheese in reality.

Once the dust had settled on the battles between ATM and Ethernet in the LAN market, the battle moved to the telecommunications WAN space and into the telcos’ home territory.

Ethernet moves into the WAN space

I first heard about Ethernet being proposed as a wide-area protocol in the mid-1990s when I visited Nortel’s R&D labs in Canada. It was one of those moments I still remember quite well. It was in a small back room. I don’t remember any equipment being on display; if I remember correctly, all that was being shown were some blown-up diagrams of a proposed public Ethernet architecture. Now I would not claim that Nortel invented the idea of Ethernet’s use in the public sphere, as I’m sure that if I had visited 3Com’s R&D centre (the home of Ethernet) or other vendors’ labs, I would have seen similar ideas being articulated.

The thought of Ethernet being used in WANs had not occurred to me before that moment and it really made me sit back and think (maybe I should have acted!). However, if Ethernet is ever going to be a credible wide-area layer-2 network transport protocol, it needs to be able to transparently transport any of the major protocols used in a converged network. This capability is provided in IP through the use of pseudowires and MPLS. This is where PBB-TE comes to the fore.

Over the next few years a whole new sector of the telecommunications industry started; this was termed Metropolitan Area Networks (MANs) or, more simply, metro networks. In the latter half of the 1990s – the heyday of carrier start-ups – many new telcos were set up using a new architecture paradigm based on Ethernet over optical fibre. The idea was that metro networks would sit between enterprise networks and traditional wide-area telco networks. Metro networks would interconnect enterprise offices at the city level, aggregate data traffic that needed to be transported long distances, and deliver it to telcos who could ship that traffic as required to other cities or countries using frame relay services.

Many of these metro players ceased trading during the recent challenging years, but many were refinanced and resurrected to live again.

(Picture credit: exponential-e) It is very interesting to note that throughout that phase of the industry, and even up to today, Ethernet has not gained full acceptance as a viable public service protocol from the wider telecommunications industry. Until a few years ago, outside of specialist metro players, very few traditional carriers offered wide-area Ethernet services at all. This was quite amazing when you consider that it flew in the face of strong requests from enterprises, who all wanted them. What turned this round, to a degree, was the deployment of MPLS backbones by carriers, who could then offer Ethernet as an access protocol to enterprises but handle Ethernet services on their network through an MPLS tunnel.

Bringing ourselves up to the present time, traditional carriers are starting to offer layer-2 Ethernet-based Virtual Private Networks (VPNs) in parallel to their offerings of layer-3 MPLS-based IP-VPNs. One of the interesting companies that has focused on Ethernet in the UK is exponential-e, who I will be writing about soon.

    The wide area protocol battle restarts

    What is of overt interest today, is the still open issue of Ethernet over fibre being a real alternative to a core architecture based on MPLS. It could be said that this battle is in the process of really starting today but it has been very slow in coming. Ethernet LAN <-> Ethernet WAN <-> Ethernet LAN would seem to be a natural and simple option.

    Ethernet is dominant in carrier’s customers LAN networks, Ethernet is dominant in the metro networks, but Ethernet has had little impact in WAN networks. Partially, this is to do with technology politics and a distinct reluctance of traditional carriers to offer Ethernet services (in spite of insistent siren calls from their customers). It could be said that the IP and MPLS bandwagon has taken all the money, time and strategy efforts of the telcos leaving Ethernet in the wings.

    It should be said that this also has to do with the technical challenges associated with delivering Ethernet services over longer distances than those seen in metro networks. This is where a new initiative called Provider Backbone Transport (PBT) comes into play which could provide a real alternative to the almost-by-default-use-of-MPLS core strategy of most of the traditional carriers. Interestingly, reminding me of my visit to Canada, Nortel along with Siemens are one of the key players in this market. Here is an interesting presentation Highly Scalable Ethernets by Paul Bottorff, Chief Architect, Carrier Ethernet, Nortel.

    PBT is a group of enhancements to Ethernet that are defined in the IEEE's Provider Backbone Bridging Traffic Engineering (PBB-TE) work – phew, that's a mouthful.

    PBB-TE is all about separating the Ethernet service layer from the network layer, thus enabling the development of carrier-grade public Ethernet services such as those outlined by Nortel:

    • Traffic engineering and hard QoS: Provider Backbone Transport enables service providers to traffic engineer their Ethernet networks. PBT tunnels reserve appropriate bandwidth and support the provisioned QoS metrics that guarantee SLAs will be met without having to overprovision network capacity.
    • Flexible range of service options: PBT supports multiplexing of any service inside PBT tunnels – including both Ethernet and MPLS services. This flexibility allows service providers to deliver native Ethernet initially and MPLS-based services (VPWS, VPLS) if and when they require them.
    • Protection: PBT allows the service provider to provision not only a point-to-point Ethernet tunnel but also an additional backup tunnel to provide resiliency. In combination with IEEE 802.1ag, these working and protection paths enable PBT to provide sub-50 ms recovery.
    • Scalability: By turning off MAC learning, the undesirable broadcast behaviour that creates MAC flooding and limits the size of the network is removed. Additionally, PBT offers a full 60-bit addressing scheme that enables a virtually limitless number of tunnels to be set up in the service provider network.
    • Service management: Because each packet is self-identifying, the network knows both the source and destination address in addition to the route – enabling more effective alarm correlation, service-fault correlation and service-performance correlation.
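    That 60-bit addressing scheme comes from combining the 48-bit destination backbone MAC address with a 12-bit backbone VLAN ID. As a minimal sketch of my own (an illustration of the arithmetic, not vendor code), composing such a tunnel identifier could look like this:

```python
def pbt_tunnel_id(dest_mac: str, vlan_id: int) -> int:
    """Compose a 60-bit PBT tunnel identifier from a 48-bit
    destination MAC address and a 12-bit backbone VLAN ID."""
    mac_bits = int(dest_mac.replace(":", ""), 16)  # 48 bits
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    return (mac_bits << 12) | vlan_id  # 48 + 12 = 60 bits

tid = pbt_tunnel_id("00:1b:25:aa:bb:cc", 100)
print(hex(tid))
```

    Because every tunnel is identified by this MAC + VLAN pair, the address space is vastly larger than, say, the 4096 VLANs of plain 802.1Q – which is what makes the "virtually limitless" claim above plausible.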

    Ethernet has a long history since it was invented by Robert Metcalfe in the early 1970s. It went on to dominate enterprise networking and intra-city local networking; only wide area networks are holding out. MPLS has taken centre stage as the ATM replacement technology, providing the level of Quality of Service (QoS) needed by today's multimedia services. But MPLS is expensive and challenging to operate on a large scale, and maybe Ethernet can catch up with the long-overdue enhancements brought about by PBB-TE.

    In Where now frame relay? I talked about how frame relay took off once X.25 was cleared of the cumbersome overheads that made it expensive to use as a wide area protocol. PBB-TE / PBT could achieve a similar result for Ethernet. Maybe this initiative will clear the current logjam and enable Ethernet to take up its rightful position in the public service space alongside MPLS. The battle could be far from over!

    The big benefit at the end of the day is that PBB-TE is a layer-2 technology, so a carrier deploying it would not need to buy additional layer-3 infrastructure and OSS in the form of routers and MPLS, thus, hopefully, representing a significant cost saving for roll-out.

    Addendum: BT have recently committed to deploy Nortel’s PBT technology (presentation). As can be read in the Light Reading analysis:

    [BT] is keen to play down any PBT versus MPLS positioning. He says the deployment at BT shows how PBT and MPLS can co-exist. “It’s not a case of PBT versus MPLS. PBT will be deployed in the metro core and interface back into BT’s MPLS backbone network.”

    It’s quite easy to understand why BT would want to say this!

    Addendum #1: A competitor to PBT / PBB-TE is T-MPLS – see my post
    Addendum #2: Marketing Presentations of the Metro Ethernet Forum

    Addendum: One of the principal industry groups promoting and supporting carrier-grade Ethernet is the Metro Ethernet Forum (MEF), and in 2006 they introduced their official certification programme. The certification is currently only available to MEF members – both equipment manufacturers and carriers – to certify that their products comply with the MEF's carrier Ethernet technical specifications. There are two levels of certification:

    MEF 9: A service-oriented test specification that tests conformance of Ethernet services at the UNI inter-connect where the Subscriber and Service Provider networks meet. This represents a good safeguard for customers that the Ethernet service they are going to buy will work! Presentation or High bandwidth stream overview

    MEF 14: A new level of certification that looks at hard QoS, a very important aspect of service delivery not covered in MEF 9. MEF 14 certifies that Carrier Ethernet services are backed by Service Level Specifications, providing hard QoS guarantees on Carrier Ethernet business services for corporate customers and on triple-play data/voice/video services for carriers. Presentation.

    Addendum: Enabling PBB-TE – MPLS seamless services

    SONET – SDH, the great survivors

    March 8, 2007

    When I first wrote about Synchronous Digital Hierarchy (SDH) and SONET (SDH is the European version of SONET) back in 1992, it was seen to be truly transformational for the network service provider industry. It marked a clear boundary from just continually enhancing an old asynchronous technology, belatedly called Plesiochronous Digital Hierarchy (PDH), to a new approach that could better utilise and manage the ever-increasing bandwidths then becoming available through the use of optical fibre. An up-to-date overview of SDH / SONET technology can be found in Wikipedia.

    SONET was initially developed in the USA and adapted for the rest of the world a little later as SDH. This was needed as the rest of the world used different data rates to those used in the USA – which later caused interesting inter-connect issues when connecting SONET to SDH networks. For the sake of this post I will only use the term SDH from now on as, by installation base, SDH far outweighs SONET.

    Probably even more amazing was that when it was launched, following many years of standardisation efforts, it was widely predicted that, along with ATM, it would become a major transmission technology. It has achieved just that. Although ATM hit the end stop pretty quickly and the dominance of IP was unforeseen at the time, SDH and SONET went on to be deployed by almost all carriers that offered traditional Public Switched Telephone Network (PSTN) voice services.

    The benefits that were used to justify rollout of synchronous networking at the time pretty much panned out in practice.

    • Clock rates tightly synchronised within a network through the use of atomic clocks
    • Synchronisation enabled easier network inter-connect between carriers
    • Considerably simplified and reduced costs of extracting low data rate channels from high-data rate backbone fibre optic cables
    • Considerable reduction in management costs and overheads compared to PDH systems.
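    To give a feel for the hierarchy behind these benefits, SDH line rates are simple multiples of the basic STM-1 rate of 155.52 Mbit/s, which is itself exactly three times the basic SONET STS-1/OC-1 rate of 51.84 Mbit/s (so an STM-1 lines up with an OC-3 at inter-connect). A quick sketch of the arithmetic:

```python
STM1_MBITS = 155.52   # basic SDH rate (STM-1)
OC1_MBITS = 51.84     # basic SONET rate; STM-1 == OC-3

# The standard SDH multiplexing levels are powers-of-four multiples
for n in (1, 4, 16, 64):
    print(f"STM-{n}: {STM1_MBITS * n:.2f} Mbit/s (= OC-{3 * n})")
```

    This neat integer relationship between levels is exactly what made dropping a low-rate channel out of a high-rate backbone so much simpler than in the PDH world, where each multiplexing stage ran at its own loosely related rate.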

    In the late 1990s, as SDH came out of the telecommunications world rather than the IT world, it was often considered to be a legacy technology along with ATM. This was driven by the fact that SDH was a Time Division Multiplexed (TDM) based protocol with its roots deeply embedded in the voice world whereas the new IP driven data world was packet based.

    In reality, carriers had by this time made hefty commitments to SDH and they were not about to throw that money away as they had done with ATM. What carriers wanted was a network infrastructure that could deliver both traditional TDM-based voice and data services and the newer packet-based services, i.e. a true multi-service network. In many ways SDH is a technology that has not only survived the IP onslaught but will be around for many years to come. It will certainly be very hard to displace.

    From a layer perspective, IP packets are now generally delivered using an MPLS infrastructure that was put in place to replace ATM switching. MPLS sits on top of SDH, which in turn sits on top of Dense Wave Division Multiplexing (DWDM) optical fibre. DWDM will be the subject of a future post.

    One interesting aspect of all this is that quite a few carriers that started up in the late 1990s (many didn't survive the telecommunications implosion) looked to a future packet-based world and did not wish to provide traditional TDM-based voice and data services. To this breed of carrier, the deployment of SDH did not seem in any way sensible, and they looked to remove this seemingly redundant layer from their architecture by building a network where MPLS sat straight on top of DWDM. This is a common architecture today for a greenfield network start-up looking to deliver legacy voice and data services purely over an IP network.

    A number of ‘improved’ SDH alternatives sprang up in the late 1990s, the most visible being Cisco’s Dynamic Packet Transport (DPT) / Resilient Packet Ring (RPR) technology. To quote Cisco at the time:

    DPT is a Cisco-developed, IP+Optical innovation which combines the intelligence of IP with the bandwidth efficiencies of optical rings. By connecting IP directly to fiber, DPT eliminates unnecessary equipment layers thus enabling service providers to optimize their networks for IP traffic with maximum efficiencies.

    DPT never really caught on with carriers for a variety of technical and political reasons.

    Another European initiative came from a small start-up in Sweden at the time – net insight. This was called Dynamic Synchronous Transfer Mode (DTM). To quote net insight at the time:

    DTM combines the advantages of guaranteed throughput, channel isolation, and inherent QoS found in SDH/SONET with the flexibility found in packet-based networks such as ATM and Gigabit Ethernet. DTM, first conceived in 1985 at Ericsson and developed by a team of network researchers including the three founders of Net Insight, uses innovative yet simple variable bandwidth channels.

    Again, DTM failed to gain market traction.

    SDH has a massive installed base in 2007 and continues to grow at an albeit steady pace. For those carriers that have already deployed SDH, it is pretty much a no-brainer to carry on using it, while new carriers who focus on all services being delivered on a converged IP network would never deploy SDH.

    SDH has always managed to keep up with the exploding data rates available on DWDM fibre systems, so it will maintain its position in carrier networks until incumbent carriers really decide to throw everything away and build fully converged networks based on IP. There are a lot of eyes on BT at present!

    SDH extensions

    In recent years, there have been a number of extensions to basic SDH to help it migrate to a packet oriented world:

    Generic Framing Procedure (GFP): To make SDH more packet-friendly, the ITU, ANSI, and IETF have specified standards for transporting various services such as IP, ATM and Ethernet over SONET/SDH networks. GFP is a protocol for encapsulating packets over SONET/SDH networks.

    Virtual Concatenation (VCAT): Allows a number of smaller SDH payload containers to be grouped into a single virtually concatenated payload sized to match the data service being carried, transporting packet traffic much more efficiently than forcing it into the nearest (much larger) rigid container.

    Link Capacity Adjustment Scheme (LCAS): When customers’ needs for capacity change, they want the change to occur without any disruption to the service. LCAS, a VCAT control mechanism, provides this capability.
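    As an illustration of why VCAT matters, consider sizing a group of containers to an Ethernet service. Using the approximate payload capacities of the standard virtual containers (VC-12 ≈ 2.176 Mbit/s, VC-3 ≈ 48.384 Mbit/s, VC-4 ≈ 149.76 Mbit/s), a rough sizing sketch of my own:

```python
import math

# Approximate payload capacities of SDH virtual containers (Mbit/s)
VC_PAYLOAD = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

def vcat_members(service_mbits: float, container: str) -> int:
    """Number of containers needed in a virtually concatenated group."""
    return math.ceil(service_mbits / VC_PAYLOAD[container])

# Fast Ethernet fits snugly into a VC-12-46v group...
print(f"100 Mbit/s -> VC-12-{vcat_members(100, 'VC-12')}v")
# ...and Gigabit Ethernet into a VC-4-7v group
print(f"1000 Mbit/s -> VC-4-{vcat_members(1000, 'VC-4')}v")
```

    Without VCAT, that 100 Mbit/s Ethernet service would have had to occupy a whole 155.52 Mbit/s STM-1, wasting around a third of the capacity.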

    These standards have helped SDH / SONET adapt to a packet-based world, something that was missing from the original protocol standards of the early 1990s.

    A more detailed overview of these SDH extensions is provided by Cisco.

    At the end of the day there seem to be four core transmission technologies at the heart of networks: IP, MPLS, the Optical Transport Hierarchy (OTH) and, if the carrier is a traditional telco, SDH / SONET. It will be interesting to see how this pans out in the next decade. Have we reached the end game now? Are there other approaches that will start to come to the fore? What is the role of Ethernet? These are some interesting questions I will attempt to tackle in future posts.

    The follow on to this post is: Making SDH, DWDM and packet friendly

    #2 My 1993 predictions for 2003 – hah!

    March 6, 2007

    The Importance of Data and Multimedia.

    After looking at my 1993 forecasts for 2003 for Traditional Telephony: Advanced Services in #1 My 1993 predictions for 2003 – hah!, let’s look at the Importance of Data and Multimedia. I’ll mark the things I got right in green, things I got wrong in red and maybe right in orange.

    The public network operators have still not written down all their 64kbit/s switching networks, but all now have the capability of transporting integrated data in the form of voice, image, data, and video. At least 50% of the LAN-originated ATM packets carried by the public operators is non-voice traffic. Video traffic such as video mail, video telephone calls, and multimedia is common. Information delivered to the home, business, and individuals while on the move is managed by advanced network services integrated with customers’ equipment whether that be a simple telephone, smart telephone, PC, or PDA.

    Well, it’s certainly the case that carriers have still not written off their 64kbit/s switching networks, and I guess this will take several decades more to happen! Video is still not that common, but with the advent of YouTube and Joost, maybe video nirvana is just around the corner. I’m not sure that we have seen advanced network services embedded in devices either! However, on consideration, maybe Wi-Fi fits the prediction rather nicely?

    Most public operators are now not only transporting video but also delivering and originating information, business video, and entertainment services. Telecommunications operators have strong alliances or joint ventures with information providers (IPs), software houses, and equipment manufacturers, as it is now realised that none by themselves can succeed or can invest sufficient skills and capital to succeed alone. Telecommunications operators have developed strong software skills to support these new businesses. Many staff, who were previously working in the computer industry, have now moved to the new sunrise companies of telecommunications.

    Telecommunication operators have not really developed strong software skills from the perspective of developing applications themselves, but they have certainly embraced integration! The last prediction was interesting, as it could be said it pertained to the Internet bubble, when many staff moved to the telecoms industry from the computing industry. However, they were forced to leave just as rapidly when the bubble burst!

    • the network should be able to store the required information
    • the network should have the capability to transfer and switch multiple media instead of just voice
    • multimedia network services need to be integrated with desktop equipment to form a seamless feature-rich application
    • rapidly changing customer requirements means that operators should be able to reduce the new product development cycle and launch products quickly and effectively, ahead of competition; being proactive instead of reactive to competitive moves would offer a considerable edge.

    Information storage on the network is pretty much a reality when you consider on-line back-up storage services, though it has hardly been pervasive. Few are generally willing to pay for on-line storage when hard disk prices have been plummeting while their capacities have been exploding.

    But what I think I had in mind was the storage of information in the network that was then held (and still is) on personal computers, and also network-based productivity tools. This has happened in a variety of ways:

    • The rise and fall of Application Service Providers in the bubble
    • The success of certain network-based services such as
    • The recent interest in pushing network-based productivity tools by the large media companies such as Google and Yahoo.

    The comment about converged networks is still a theme in progress, but the integration between network-based services and desktop equipment is certainly pretty much the norm these days with the Internet.

    This is a mixed bag of success really and seems really dated in its language. This shows just how the Internet has truly transformed our view of network based data services.

    Where now frame relay?

    February 28, 2007

    The invite to the 2007 MPLS Roadmap conference provides some interesting and timely ‘facts’ about frame relay and its current use by service providers.

    Fact #1: Carriers are rushing to move from legacy voice/data networks to an MPLS-based network for their enterprise services.

    Fact #2: Frame relay, the dominant enterprise network service of the past 10 years, runs on the carriers’ legacy networks.

    Fact #3: Some carriers will no longer respond to a frame relay RFP…and even those that do are including contract language that offers little – sometimes no – assurance that frame relay will be supported for the term of the agreement.

    Reading this, I thought it about time I brought the story of frame relay up to date. I first wrote about frame relay (FR) services an absolute age ago in 1992, when they were the very latest innovation alongside SDH and ATM. The core innovation in the frame relay protocol was its simplicity and data efficiency compared to its complex predecessor, X.25; it filled the gap for an efficient wide area network protocol. Interestingly, this is the same KISS motivation that lies behind Ethernet.

    Throughout the 90s, frame relay services went from strength to strength, and FR was the principal data service that carriers pushed enterprises to use as the basis of their wide area networks (WANs). Mind you, there were mixed views from enterprises about whether they actually trusted frame relay services, and they have the same concerns about IP-VPNs today.

    As FR is a packet-based protocol running on a shared carrier infrastructure (usually over ATM), many enterprises wouldn’t touch it with a bargepole and stuck to buying basic TDM E1 / T1 bandwidth services and managing the WAN themselves. It was the financial community that principally took this view, and they have probably not changed their minds even now, with the advent of MPLS IP-VPNs. Remember, the principal traffic protocol being transferred over WANs is IP!

    Often things seem quite funny when looked at in hindsight. In The rise and maturity of MPLS, I talked about the Achilles heel in connectionless IP data services as exemplified by the Internet – the lack of Class of Service capability. This can have tremendous impact when the network is carrying real-time services such as voice or video traffic.

    FR services, because they were run over ATM networks, actually had this capability, and carriers offered several levels of service derived from the ATM bearer, such as:

    • best effort (CIR=0, EIR=0)
    • guaranteed (CIR>0, EIR=0)
    • bandwidth on demand (CIR>0, EIR>0)

    Here CIR is the Committed Information Rate and EIR the Excess Information Rate, and these were applied to what were known as Permanent Virtual Circuits (PVCs), although towards the end of the 90s temporary Switched Virtual Circuits (SVCs) became available. Voice over frame relay services were also launched around the same time.
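    The CIR/EIR mechanism can be pictured as a simple policer: traffic within the committed rate is guaranteed, traffic between CIR and CIR+EIR is marked discard-eligible (carried only if capacity allows), and anything beyond is dropped. A much-simplified sketch of my own, assuming a single fixed measurement interval Tc with no token carry-over (real frame relay policing uses Bc/Be burst accounting):

```python
def classify(bits_in_interval: float, cir: float, eir: float,
             tc: float = 1.0) -> str:
    """Classify an interval's traffic against CIR/EIR (simplified)."""
    bc = cir * tc  # committed burst size for the interval
    be = eir * tc  # excess burst size for the interval
    if bits_in_interval <= bc:
        return "committed"         # guaranteed delivery
    if bits_in_interval <= bc + be:
        return "discard-eligible"  # DE bit set; carried if capacity allows
    return "dropped"               # exceeds CIR + EIR

# A 'guaranteed' PVC (CIR > 0, EIR = 0): nothing above CIR is carried
print(classify(1.5e6, cir=2e6, eir=0))
print(classify(2.5e6, cir=2e6, eir=0))
```

    The three service levels in the list above fall straight out of this model: best effort is everything discard-eligible (CIR=0), guaranteed has no excess band (EIR=0), and bandwidth on demand uses both bands.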

    So frame relay did not suffer from the same lack of QoS as IP, but it still got steamrollered by the IP and MPLS bandwagon. Also, unlike IP, because frame relay was based on telecommunications standards defined by the ITU, inter-connect standards existed. Because of this, carriers did inter-connect frame relay services, enabling the deployment of multi-carrier WANs. Again this was, and is, a weakness in MPLS, as described in MPLS and the limitations of the Internet. It’s a strange world sometimes!

    One of the principal reasons for frame relay’s downfall lay with its associated bearer service, ATM. In The demise of ATM, I talked about how ATM was extinguished by IP, and the direct consequence of this was that the management boards of carriers were unwilling to invest any more in ATM-based infrastructure. Thus FR became tarred with the same brush and bracketed as a legacy service as well.

    The second reason was the rise of MPLS and its associated wide area service, MPLS based IP-VPNs. Carriers wanted to reduce infrastructure CAPEX and OPEX by focusing on IP and MPLS based infrastructures only.

    Traffic growth by protocol (Source: Infonetics in Feb 07 Capacity Magazine)

    Where has it all ended up? Although frame relay never had as much religion associated with it as ATM and IP (as exemplified by ‘bellheads’ and ‘netheads’), it was the main data service for WANs for pretty much a decade. When MPLS IP-VPNs came along in the early years of this century, there seemed to be tremendous resistance from the carrier frame relay data network folks, who never believed for one moment that frame relay services were doomed.


    We are now pretty much up to date. Carriers still have much legacy frame relay equipment installed, and frame relay services are still being supported for their customers, but frame relay can undoubtedly be classed as a legacy service.

    As with ATM and Ethernet, many carriers are migrating their legacy frame relay services to being carried over their core MPLS networks, encapsulated in ‘tunnels’. Thus frame relay could be considered to be just an access protocol in this new IP/MPLS world, supported while customers are still using it.

    Any Transport over MPLS (AToM) is what it is all about these days: Ethernet, frame relay, IP and even ATM services are all being carried over MPLS-based core networks.
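    In this ‘Any Transport over MPLS’ model, the customer frame simply travels behind a two-label MPLS stack: an outer tunnel label to cross the core and an inner pseudowire label identifying the emulated circuit. A minimal sketch of my own (illustrating the 32-bit label stack entry format, not real router code; the label values are made up):

```python
def mpls_entry(label: int, exp: int = 0, bottom: bool = False,
               ttl: int = 64) -> bytes:
    """Encode one 32-bit MPLS label stack entry:
    20-bit label, 3-bit EXP, 1-bit bottom-of-stack flag, 8-bit TTL."""
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return word.to_bytes(4, "big")

customer_frame = b"..."  # e.g. an Ethernet or frame relay PDU
packet = (
    mpls_entry(18004)              # outer tunnel label: crosses the core
    + mpls_entry(22, bottom=True)  # inner pseudowire label: the circuit
    + customer_frame
)
print(packet[:8].hex())
```

    The core routers only ever look at the outer label, which is why one MPLS backbone can carry all of these legacy services indifferently.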

    As an access service, and because of its previous popularity with enterprises, frame relay will live on, but it’s been several years since there has been any significant investment in frame relay service infrastructure by the majority of carriers around the world.

    Other services that did not survive the IP and MPLS onslaught in the same time-frame were Switched Multi-megabit Data Service (SMDS) and the ill-fated Novell Network Connect Services (NCS) – more on this in a future post.

    One protocol that did not suffer but actually flourished was Ethernet, whose genesis, like IP’s, lay outside the telecommunications world. Indeed, it could be said that LAN protocols have won the battles and wars against the standards defined by the telecommunications industry.

    #1 My 1993 predictions for 2003 – hah!

    February 27, 2007

    #1 Traditional Telephony: Advanced Services

    Way back in 1993 I wrote a paper entitled Vision 2003 that dared to try and predict what the telecommunications industry would look like ten years in the future. I looked at ten core issues; telephony services was the first. I thought it might be fun to take a look at how much I got right and how much I got wrong! This is a cut-down version of the original and I’ll mark the things I got right in green and things I got wrong in red.

    Caveats: Although it is stating the obvious, it should be remembered that nobody knows the future and, even though we have used as many external attributable market forecasts from reputable market research companies as possible to size the opportunities, they in effect, know no more than ourselves. These forecasts should not be considered as being quantitative in the strictest sense of the word, but rather as qualitative in nature. Also, there is very little by way of available forecasts out to the year 2003 and certainly even fewer that talk about network revenues. You only need look back to 1983 and see whether the phenomenal changes that happened in the computer industry were forecasted to see that forecasting is a dangerous game.

    Well I had to protect myself didn’t I?

    As far as users are concerned, the differentiation between fixed and cellular networks will have completely disappeared and both will appear as a seamless whole. Although there will be still be a small percentage of the market still using the classical two-part telephone, most customers will be using a portable phone for most of their calls. Data and video services, as well as POTs, will be key business and residential services. Voice and computer capability will be integrated together and delivered in a variety of forms such as interactive TVs, smart phones, PCs, and PDAs. The use of fixed networks is cheap, so portable phones will automatically switch across to cordless terminals in the home or PABXs in the office to access the broad band services that cannot be delivered by wireless.

    A good call on the dominance of mobile phones (it’s quaint that I called them “portable phones” – I guess I was thinking of the house-brick-sized phones of that era). The convergence of mobile and fixed phones still eludes us even in 2007 – now that really is amazing!

    Network operators have deployed intelligent networks on a network-wide basis and utilise SDH together with coherent network-wide software management to maximise quality of service and minimise cost. As all operators have employed this technology, prices are still dropping and price is still the principal differentiator on core telephony services. Most individuals have access to advanced services such as CENTREX and network based electronic secretaries that were only previously available to large organisations in the early 1990s. Because of severe competition, most services are now designed for, and delivered to, the individual rather than being targeted at the large company. All operators are in charge of their own destiny and develop service application software in-house, rather than buying it from 3rd party switch vendors.

    A real mixed bag here, I think. I was certainly right about the dominance of the mobile phone but way out about operators all developing their own service application software. I rabbited on for several pages about Intelligent Networks (IN) bringing the power of software to the network. This certainly happened, but it didn’t lead to the plethora of new services that were all the rage at the time – electronic secretary services etc. What we really saw was a phenomenal rise in annoying services based on Automatic Call Distribution (ACD) – “Press 1 for….” then “Press n for…” – so loved by CFOs.

    Customers, whether in the office or at home, will be sending significant amounts of data around the public networks. Simple voice data streams will have disappeared to be replaced with integrated data, signalling, and voice. Video telephony is taken for granted and a significant number of individuals use this by choice. There is no cost differential between voice and video.

    All public network operators are involved in many joint ventures delivering services, products, software, information and entertainment services that were unimagined in 1993.

    Tongue in cheek, I’m going to claim that I got a good hit predicting Next Generation Networks that integrate services on a single network. Wouldn’t it have been great if I had predicted that it would all be based on IP? It was a bit too early for that at the time. Wow, did I get video telephony wrong! This product sector has not even started, let alone taken off.

    What I really did not see at all, because it was way too far into the future, was of course free VoIP telephony services, as discussed in Are voice (profits) history?

    Next: #2 Integration of Information and the Importance of Data and Multimedia