Making SDH and DWDM packet friendly

March 30, 2007

Back in 1993, I wrote about the advances taking place in fibre optic technologies and optical amplifiers. At that time, technology development was principally concerned with improving transmission distances using optical amplifier technology and with increasing data rates. Each fibre carried a single wavelength and hence provided a single data channel. Wide area traffic in the early 1990s was dominated by Public Switched Telephone Network (PSTN) telephony traffic, as this was well before the explosion in data traffic caused by the Internet. When additional throughput was required, it was relatively simple to lay down additional fibres in a terrestrial environment. Indeed, this became standard procedure to the extent that many fibres were laid in a single pipe with only a few being used, or ‘lit’ as it was known. Unlit fibre strands were called dark fibre. For terrestrial networks, when increasing traffic demanded additional bandwidth on a link, it was a simple job to add ports to the appropriate SDH equipment and light up an additional dark fibre.

Wave Division Multiplexing (Picture credit: photeon)

In undersea cables, adding additional fibres to support traffic growth was not so easy, so the concept of Wavelength Division Multiplexing (WDM) came into common usage for point-to-point links (the laboratory development of WDM actually goes back to the 1970s). The use of WDM enabled transoceanic carriers to upgrade the bandwidth of their undersea cables without the need to lay additional cables, which would cost billions of dollars.

As shown in the picture, a WDM-based system uses multiple wavelengths, thus multiplying the available bandwidth by the number of wavelengths that can be supported. The number of wavelengths that could be used and the data rate on each wavelength were limited by the quality of the optical fibre being upgraded and by the state of the art of the optical termination electronics. Multiplexers and de-multiplexers at either end of the cable combined and separated the individual channels, converting to and from electrical signals as required.

A number of WDM technologies or architectures were standardised over time. In the early days, Coarse Wavelength Division Multiplexing (CWDM) was relatively proprietary in nature and meant different things to different companies. CWDM combines up to 16 wavelengths onto a single fibre and uses an ITU-standard 20nm spacing between wavelengths, from 1310nm to 1610nm. Since the wavelengths are relatively far apart compared to DWDM, CWDM components are generally relatively cheap.
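
To make the channel spacing concrete, here is a minimal sketch generating the CWDM wavelength plan using the figures quoted above (the full ITU-T grid definition includes a few extra channels at the short-wavelength end):

```python
# CWDM grid as described above: channels every 20nm from 1310nm to 1610nm.
START_NM, STOP_NM, SPACING_NM = 1310, 1610, 20

channels = list(range(START_NM, STOP_NM + 1, SPACING_NM))
print(f"{len(channels)} CWDM channels: {channels}")
# prints 16 channels: 1310, 1330, 1350, ... 1590, 1610
```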

One of the major issues at the time was that Erbium Doped Fibre Amplifiers (EDFAs), as described in optical amplifiers, could not be utilised, due to the wavelengths selected or the frequency stability required to be able to de-multiplex the multiplexed signals.

In the late 1990s there was an explosion of development activity aimed at deriving benefit from the concept of Dense Wavelength Division Multiplexing (DWDM), which could utilise EDFAs operating in the 1550nm window. An EDFA will amplify any number of wavelengths modulated at any data rate, as long as they fall within its amplification bandwidth.

DWDM combines up to 64 wavelengths onto a single fibre and uses an ITU standard that specifies 100GHz or 200GHz spacing between the wavelengths, arranged in several bands around 1500-1600nm. With DWDM technology the wavelengths are closer together than those used in CWDM, so the multiplexing equipment is more complex and expensive than CWDM. However, DWDM allowed a much higher density of wavelengths and enabled longer distances to be covered through the use of EDFAs. DWDM systems were developed that could deliver hundreds of Gbit/s, and later terabits per second, over a single fibre using up to 40 or 80 simultaneous wavelengths, e.g. Lucent in 1998.
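
As a rough illustration of how the capacity multiplies, the sketch below lays out an ITU-style 100GHz grid around the 193.1THz reference frequency and multiplies channel count by per-channel rate; the channel counts and line rates here are illustrative rather than any particular product’s specification:

```python
# DWDM capacity scales as channel count x per-channel rate.
# Grid anchored at 193.1 THz (~1552.5nm) with 100GHz spacing (illustrative).

C = 299_792_458  # speed of light, m/s

ANCHOR_THZ = 193.1   # ITU grid reference frequency
SPACING_GHZ = 100    # 100GHz grid (200GHz and 50GHz variants also exist)

def channel_wavelength_nm(n: int) -> float:
    """Wavelength of the nth channel above/below the anchor frequency."""
    freq_hz = (ANCHOR_THZ * 1e12) + n * SPACING_GHZ * 1e9
    return C / freq_hz * 1e9

for channels, rate_gbps in [(40, 10), (80, 10), (80, 40)]:
    print(f"{channels} channels x {rate_gbps} Gbit/s = "
          f"{channels * rate_gbps / 1000:.1f} Tbit/s per fibre")

print(f"Channel 0 sits at roughly {channel_wavelength_nm(0):.2f} nm")
```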

I wouldn’t claim to be an expert in the subject, but I would expect that in dense urban environments, or over longer runs where access to the fibre ducts is available, it is considerably cheaper to install additional runs of fibre than to install expensive DWDM systems. An exception to this would be a carrier installing cables across a continent. If dark fibre is available then it’s an even simpler decision.

Although considerable advances were taking place in optical transport with the advent of DWDM systems, existing SONET and SDH standards of the time were limited to working with a single wavelength per fibre and with single optical links in the physical layer. SDH could cope with astounding data rates on a single wavelength, but could not be used with the emerging DWDM optical equipment.

Optical Transport Hierarchy

This major deficiency in SDH / SONET led to further standards development initiatives to bring it “up to date”. These are known as the Optical Transport Network (OTN), working in an Optical Transport Hierarchy (OTH) world. OTH follows the same nomenclature as used for PDH and SDH networks.

The ITU-T G.709 standard, Interfaces for the OTN (released between 1999 and 2003), is a standardised set of methods for transporting wavelengths in a DWDM optical network that allows the use of completely optical switches, known as Optical Cross Connects, which do not require expensive optical-electrical-optical conversions. In effect G.709 provides a service abstraction layer between services such as standard SDH, IP, MPLS or Ethernet and the physical DWDM optical transport layer. This capability is also known as OTN/WDM, in a similar way that the term IP/MPLS is used. Optical signals with bit rates of 2.5, 10, and 40 Gbit/s were standardised in G.709 (G.709 overview presentation) (G.709 tutorial).

The functionality added to SDH in G.709 is:

  • Management of optical channels in the optical domain
  • Forward error correction (FEC) to improve error performance and enable longer optical spans (a rough sketch of the FEC overhead follows this list)
  • Standard methods for managing end-to-end optical wavelengths
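
On the FEC point, a back-of-the-envelope sketch: G.709 wraps the client signal in an OTU frame whose FEC is a Reed-Solomon RS(255,239) code, which works out at roughly 7% overhead. The OTU line rates below are the nominal figures usually quoted for G.709 and should be treated as approximate:

```python
# G.709 FEC is RS(255,239): 16 check bytes per 255-byte codeword.
RS_N, RS_K = 255, 239
fec_overhead = (RS_N - RS_K) / RS_K
print(f"RS({RS_N},{RS_K}) FEC overhead: {fec_overhead:.1%}")   # ~6.7%

# (mapping, client rate Gbit/s, nominal OTU line rate Gbit/s) - approximate
otn_levels = [
    ("STM-16  -> OTU1", 2.488, 2.666),
    ("STM-64  -> OTU2", 9.953, 10.709),
    ("STM-256 -> OTU3", 39.813, 43.018),
]

for name, client, line in otn_levels:
    print(f"{name}: {client:>7.3f} -> {line:>7.3f} Gbit/s "
          f"({line / client - 1:.1%} total framing + FEC overhead)")
```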

Other extensions to bring SDH up to date and make it ‘packet friendly’

Almost in parallel with the development of the G.709 standards, a number of other extensions were made to SDH to make it more packet friendly.

Generic Framing Procedure (GFP): The ITU, ANSI, and IETF have specified standards for transporting various services such as IP, ATM and Ethernet over SONET/SDH networks. GFP is a protocol for encapsulating packets over SONET/SDH networks.

Virtual Concatenation (VCAT): A number of smaller SONET / SDH containers can be grouped into a single, right-sized virtual payload, so that packet traffic such as Ethernet or Packet over SONET (POS) can be transported more efficiently than with the coarse standard container sizes.

Link Capacity Adjustment Scheme (LCAS): When customers’ needs for capacity change, they want the change to occur without any disruption in the service. LCAS, a VCAT control mechanism, provides this capability by adding or removing members of a VCAT group without disrupting traffic.
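
A small sketch of how VCAT right-sizes the SDH payload for common Ethernet rates (the container payload capacities are the nominal SDH figures, rounded); LCAS would then let the operator grow or shrink the resulting group without a traffic hit:

```python
# Pick the number of like-sized SDH virtual containers needed to carry a
# given Ethernet rate, rather than rounding up to the next huge container.
import math

CONTAINERS_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}  # nominal

def vcat_group(service_mbps: float, container: str) -> str:
    members = math.ceil(service_mbps / CONTAINERS_MBPS[container])
    capacity = members * CONTAINERS_MBPS[container]
    return f"{container}-{members}v (~{capacity:.0f} Mbit/s)"

print("10 Mbit/s Ethernet  ->", vcat_group(10, "VC-12"))     # VC-12-5v
print("100 Mbit/s Ethernet ->", vcat_group(100, "VC-12"))    # VC-12-46v
print("1000 Mbit/s Ethernet ->", vcat_group(1000, "VC-4"))   # VC-4-7v
```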

These standards have helped SDH / SONET adapt to an IP and Ethernet packet-based world, a capability that was missing from the original protocol standards of the early 1990s.

Next Generation SDH (NG-SDH)

If a SONET or SDH network is deployed with all the extensions that make it packet friendly, it is commonly called Next Generation SDH (NG-SDH). The diagram below shows the different ages of SDH, concluding in the latest ITU standards work called T-MPLS (I cover T-MPLS in: PBT – PBB-TE or will it be T-MPLS?).

Transport Ages (Picture credit: TPACK)

Multiservice provisioning platform (MSPP)

Another term in widespread use with advanced optical networks is MSPP.

SONET / SDH equipment uses what are known as add / drop multiplexers (ADMs) to insert or extract data from an optical link. Technology improvements enabled ADMs to include cross-connect functionality to manage multiple fibre rings and DWDM in a single chassis. These new devices replaced multiple legacy ADMs and also allowed connections directly from Ethernet LANs to a service provider’s optical backbone. This capability was a real benefit to Metro networks sitting between enterprise LANs and long distance carriers.

There are almost as many variant acronyms in use as there are equipment vendors:

  • Multiservice provisioning platform (MSPP): includes SDH multiplexing, sometimes with add-drop, plus Ethernet ports, sometimes packet multiplexing and switching, sometimes WDM.
  • Multiservice switching platform (MSSP): an MSPP with a large capacity for TDM switching.
  • Multiservice transport node (MSTN): an MSPP with feature-rich packet switching.
  • Multiservice access node (MSAN): an MSPP designed for customer access, largely via copper pairs carrying Digital Subscriber Line (DSL) services.
  • Optical edge device (OED): an MSSP with no WDM functions.

This has been an interesting post in that it has brought together many of the technologies and protocols discussed in the previous posts, in particular SDH, Ethernet and MPLS, and joined them to optical networks. It seems strange to say on one hand that the main justification for deploying converged Next Generation Networks (NGNs) based on IP is to simplify existing networks and hence reduce costs, but then consider the complexity and plethora of acronyms and standards associated with that!

I think there is only one area that I have not touched upon and that is the IETF’s initiative – Generalised MPLS (GMPLS) or ASON / ASTN, but that is for another day!

Addendum: Azea Networks, upgrading submarine cables


Colo crisis for the UK Internet and IT industry?

March 29, 2007

The UK Internet and IT industries are facing a real crisis that is creeping up on them at a rate of knots (Source: theColocationexchange). In the UK we often believe that we are at the forefront of innovation and the delivery of creative content services such as Web 2.0-based applications and IPTV, but this crisis could force many of these services to be delivered from non-UK infrastructure over the next few years.

So, what are we talking about here? It’s the availability of colocation (colo) services and what you need to pay to use them. Colocation is an area where the UK has excelled and has led Europe for a decade, but this could be set to change over the next twelve months.

It’s no secret to anyone that hosts an Internet service that prices have gone through the roof for small companies in the last twelve months, forcing many of the smaller hosters to just shut up shop. The knock-on effects of this will have a tremendous impact on the UK Internet and IT industries as it also impacts large content providers, content distribution companies such as Akamai, telecom companies and core Internet Exchange facilities such as LINX. In other words, pretty much every company involved in delivering Internet services and applications.

We should be worried.

Estimated London rack pricing to 2007 (Source: theColocationexchange)

The core problem is that available co-location space is not just in short supply in London, it is simply disappearing at an alarming and accelerating rate, as shown in the chart below (it is even worse in Dublin). It could easily run out for anyone who does not have deep pockets.

Estimated space availability in London area (Source: theColocationexchange)

What is causing this crisis?

Here are some of the reasons.

London’s ever increasing power as a world financial hub: According to the London web site: “London is the banking centre of the world and Europe’s main business centre. More than 100 of Europe’s 500 largest companies have their headquarters in London, and a quarter of the world’s largest financial companies have their European headquarters in London too. The London foreign exchange market is the largest in the world, with an average daily turnover of $504 billion, more than New York and Tokyo combined.”

This has been a tremendous success for the UK and has driven a phenomenal expansion in financial companies’ needs for data centre hosting, and they have turned to 3rd party colo providers to meet these needs. In particular, the need for disaster recovery has driven them not only to expand their own in-house capabilities but also to place infrastructure in 3rd party facilities. Colo companies have welcomed these prestigious companies with open arms in the face of the telecomms industry meltdown post 2001.

Sarbanes-Oxley compliance: The necessity for any company that operates in the USA to comply with the onerous Sarbanes-Oxley regulations has had a tremendous impact on the need to manage and audit the capture, storage, access, and sharing of company data. In practice, more analysis and more disk storage are needed leading to more colo space requirements.

No new colo build for the last five years: As in the telecommunications world, life was very difficult for colo operators in the five years following the new millennium. Massive investments in the latter half of the 1990s were followed by pretty much zero expansion of the industry, which remained effectively in stasis. One exception to this is IX Europe, who are expanding their facilities around Heathrow. However, builds such as this will not have any great impact on the market overall, even though they will be highly profitable for the companies expanding.

However, in the last 24 months both the telecomms and the colo industries have seen a boom in demand and a return to a buoyant market last seen in the late 1990s (Picture credit: theColocationexchange).

Consolidation: In London particularly, there has been a large trend towards consolidation and roll-up of colo facilities. A prime example would be Telecity (backed by private equity finance from 3i Group, Schroders and Prudential), which has bought Redbus and Globix in the last twenty-four months. These acquisitions included a number of smaller colo operators that focused on supplying smaller customers. The now larger operators have really concentrated on winning the lucrative business of corporate data centre outsourcing, which is seen to be highly profitable with long contract periods.

Facility absorption: In a similar way that many telecommunications companies were sold at very low prices post 2000, the same trend happened in the colo industry. In particular, many of the smaller colos west of London were bought at knock-down valuations by carriers, large third party systems integrators and financial houses. This effectively took that colo space permanently off the market.

Content services: There has been a tremendous infrastructure growth in the last eighteen months by the well known media and content organisations. This includes all the well known names such as Google, Yahoo and Microsoft. It also includes companies delivering newer services such as IPTV and content distribution companies such as Akamai. It could be said, with justification, that this growth is only just beginning and these companies are competing directly with the financial community, enterprises and carriers for what little colo space is left.

Carrier equipment rooms: Most carriers have their own in-house colo facilities to house their own equipment or to offer colo services to their customers. Few have invested in expanding these facilities in the last few years, so most are now 100% full, forcing carriers to go to 3rd party colos for expansion.

Instant use: When enterprises buy space today they immediately use it rather than letting it lie fallow.

How has the colo industry reacted to this ‘Colo 2.0’ spurt of growth?

With demand going through the roof and with a limited amount of space available in London, it is clearly a seller’s market. Users of colo facilities have seen rack prices increase at an alarming rate. For example: Colocation Price Hikes at Redbus.

However, the rack price graph above does not tell the whole story, as the charge for power, which used to be a small additional item or even thrown in for free, has risen by a factor of three or even four in the last twelve months.

Colos used to focus on selling colo space solely on the basis of rack footprints. However, the new currency they use is Amperes, not square feet measured in rack footprints. This is an interesting aspect that is not commonly understood by anyone who has not had to buy space in the last twelve months.

This is because colo facilities are not only capped in the amount of space they have for racks; they also have caps on the amount of electricity a site can take from the local power company. Also, as a significant percentage of this input power is turned into heat by the hosted equipment, colo facilities have needed to make significant investments in cooling to keep the equipment operating within its temperature specifications. They also need to invest in appropriate back-up generators and batteries to power the site in case of an external power failure.

Colo contracts are now principally based on the amount of current the equipment consumes, not its footprint. If the equipment in a rack only takes up a small space but consumes, say, 8 to 10 Amps, then the rest of the rack has to remain empty unless you are willing to pay for an additional full rack’s worth of power.

If a rack owner sub-lets shelf space in a rack to a number of their customers, each one has to be monitored with individual Ammeters placed on each shelf.

One colo explains this to their customers in a rather clumsy way:

“Price Rises Explained: Why has there been another price change?

By providing additional power to a specific rack/area of the data floor, we are effectively diverting power away from other areas of the facility, thus making that area unusable. In effect, every 8amps of additional power is an additional rack footprint.

The price increase reflects the real cost to the business of the additional power. Even with the increase, the cost per additional 8amps of power is still substantially less, almost half the cost of the standard price for a rack foot print including up to 8amp consumption.”

Another point to bear in mind here is the current that individual servers consume. With the sophisticated power control that is embedded into today’s servers – just like your home PC – there is a tremendous difference in the amount of current a server consumes in its idle state compared to full load. The amount of equipment placed in a rack is limited by the possible full-load current consumption, even if average consumption is less. In the case of an 8 Amp rack limit, there would also be a hard trip installed by the colo facility that turns the rack off if the current reaches, say, 10 Amps.

If the equipment consists of standard servers or telecom switches this can be accommodated relatively easily, but if a company offers services such as interactive games or an IPTV service and fills a rack with blade (card) servers, it can quite easily consume 15 to 20kW of power, or 60 Amps or more! I’ll leave it to you to undertake the commercial negotiations with your colo, but take a big chequebook!
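
For anyone who likes the arithmetic spelled out, here is a quick sketch of the Amps-to-kilowatts conversions behind the figures above, assuming a roughly 230V single-phase feed (my assumption; real facilities may use different voltages or three-phase supplies):

```python
# Rough arithmetic behind 'Amps are the new currency', assuming ~230V.
VOLTS = 230

def amps_to_kw(amps: float) -> float:
    return amps * VOLTS / 1000

def kw_to_amps(kw: float) -> float:
    return kw * 1000 / VOLTS

# A standard 8A rack allocation and its 10A hard trip:
print(f"8A rack budget ~= {amps_to_kw(8):.1f} kW")    # ~1.8 kW
print(f"10A hard trip  ~= {amps_to_kw(10):.1f} kW")   # ~2.3 kW

# A rack full of blade servers, as in the IPTV / gaming example:
for kw in (15, 20):
    print(f"{kw} kW blade rack ~= {kw_to_amps(kw):.0f} A "
          f"({kw_to_amps(kw) / 8:.1f}x a standard 8A footprint)")
```

Depending on the supply voltage assumed, a 15 to 20kW blade rack lands somewhere around 60 to 90 Amps, which is why the negotiation needs that big chequebook.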

What could be the consequences of the up and coming crisis?

Empty floors like the one seen in the picture are long gone in colo facilities in London.

Virtual hosters caught in a trap: Clearly, if a company does not own its own colo facilities but offers colo-based services to its customers, it could prove very difficult and expensive to guarantee access to sufficient space, sorry Amperage, to meet customers’ growth needs in an exploding market. As in the semiconductor market, where fabless companies are always the first hit in boom times, those companies that rely on 3rd party colos could have significant challenges facing them in coming months.

No low cost facilities will hit small hosting companies: The issues raised in this post are significant ones even for multinational companies, but for small hosting companies they are killers. Many small hosting companies who supply SME customers have already put up the shutters on their business, as it has proved not to be cost effective to pass these additional costs on to their customers.

Small Web 2.0 service companies and start-ups: The reduction in availability of low-cost colo hosting could have a tremendous impact on small Web 2.0 service development companies, where cash flow is always a problem. Many of these companies used to go to a “sell it cheap” colo, but there are fewer and fewer of these left to resort to. If small companies do go to these lower-cost colos, they can be placing their services in potential jeopardy, as the colo might have only one carrier fibre connection to the facility, or no power back-up capability, and if that goes down…

It’s not so easy to access lower cost European facilities: There is space available in some mainland European cities, and at rates considerably lower than those seen in London. However, their possible use does raise some significant issues:

  • A connection needs to be paid for between the colo centres. If we are talking about multi-Gbit/s bandwidths, this does not come cheap. It also needs to be backed up by at least a second link for resilience.
  • For real-time applications – games or synchronous backup – the additional transit delay can prove to be a significant issue.
  • Companies will need local personnel to support the facility, which can be very expensive and represents a long-term commitment in many European countries.

I called this post ‘Colo crisis for the UK Internet and IT industry?’. I hope that the issues outlined do not have a measurable negative impact in the UK, but I’m really not sure that will be the case. Even if there is a rush to build new facilities in 2007, it will take 18 to 24 months for them to come on line. If this trend continues, a lot of application and content service providers will be forced to deliver their services from Europe or the USA, with a consequent set of knock-on effects for the UK.

I hope I’m wrong.

Note: I would like to acknowledge a presentation given by Tim Anker of theColocationexchange at the Data Centres Europe conference in March 2007, which provided the inspiration for this post. Tim has been concerned about these issues and has been bringing them to the attention of companies for the last two years.


Ethernet-over-everything

March 26, 2007

And you thought Ethernet was simple! It seems I am following a little bit of an Ethernet theme at the moment, so I thought that I would have a go at listing all (many?) of the ways Ethernet packets can be moved from one location to another. Personally, I’ve always found this confusing, as there seems to be a plethora of acronyms and standards. I will not cover wireless standards in this post.

Like IP (Internet Protocol, not Intellectual Property!), the characteristics of an Ethernet connection are only as good as the bearer service it is being carried over, and thus most of the standards are concerned with that aspect. Of course, IP is most often carried over Ethernet, so the performance characteristics of the Ethernet data path bleed through to IP as well. Aspects such as service resilience and Quality of Service (QoS) are particularly important.

Here are the ways that I have come across to transport Ethernet.

Native Ethernet

Native Ethernet in its original definition runs over twisted-pair, coaxial cable or fibre (even though Metcalfe called their cables The Ether). A core feature called carrier sense multiple access with collision detection (CSMA/CD) enabled multiple computers to share the same transmission medium. Essentially this works by a node resending a packet when it did not arrive at its destination because it collided with a packet sent from another node at the same time. This is one of the principal aspects of native Ethernet that is dropped when it is used on a wide area basis, as it is not needed.

Virtual LANs (VLANs): An additional capability for Ethernet was defined by the IEEE 802.1Q standard to enable multiple Ethernet segments in an enterprise to be bridged or interconnected, sharing the same physical coaxial cable or fibre while keeping each bridged network private. VLANs are focused on a single administrative domain where all equipment configurations are planned and managed by a single entity. What is known as Q-in-Q (VLAN stacking) emerged as the de facto technique for preserving customer VLAN settings and providing transparency across a provider network.

IEEE 802.1ad (Provider Bridges) is an amendment to the IEEE 802.1Q-1998 standard that formalises the definition of Ethernet frames with multiple VLAN tags.
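
To show what Q-in-Q actually does to a frame, here is a minimal sketch that pushes a provider service tag (802.1ad TPID 0x88A8) in front of a customer 802.1Q tag (TPID 0x8100); the MAC addresses and VLAN IDs are invented for illustration:

```python
# Build a doubly-tagged Ethernet header: the customer's C-tag is carried
# untouched behind the provider's S-tag.
import struct

def vlan_tag(tpid: int, vid: int, pcp: int = 0) -> bytes:
    tci = (pcp << 13) | (vid & 0x0FFF)      # PCP(3) | DEI(1) | VID(12)
    return struct.pack("!HH", tpid, tci)

dst = bytes.fromhex("010203040506")         # made-up MAC addresses
src = bytes.fromhex("0a0b0c0d0e0f")

s_tag = vlan_tag(0x88A8, vid=200)           # provider 'S-tag' (802.1ad)
c_tag = vlan_tag(0x8100, vid=42)            # customer 'C-tag' (802.1Q)
ethertype = struct.pack("!H", 0x0800)       # IPv4 payload follows

header = dst + src + s_tag + c_tag + ethertype
print(header.hex(" "))
```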

Ethernet in the First Mile (EFM): In June 2004, the IEEE approved a formal specification developed by its IEEE 802.3ah task force. EFM focuses on standardising a number of aspects that help Ethernet from a network access perspective. In particular it aims to provide a single global standard enabling complete interoperability of services. The standards activity encompasses: EFM over fibre, EFM over copper, EFM over passive optical network, and Ethernet in the First Mile Operation, Administration, and Maintenance. Combined with whatever technology a carrier deploys to carry Ethernet over its core network, EFM enables full end-to-end Ethernet wide area services to be offered.

Over Dense Wave Division Multiplex (DWDM) optical networks

10GbE: The 10Gbit/s Ethernet standard was published in 2006 and offers full duplex capability by dropping CSMA/CD. 10GbE can be delivered over a carrier’s DWDM optical network.

Over SONET / SDH

Ethernet over SONET / SDH (EoS): For those carriers that have deployed SONET / SDH networks to support their traditional voice and TDM data services, EoS is a natural service to offer, following a keep-it-simple approach, as it does not involve tunnelling as would be needed using IP/MPLS as the transmission medium. Ethernet frames are encapsulated into SDH Virtual Containers. This technology is often preferred by customers as it does not involve the transmission of Ethernet via encapsulation over a shared IP or MPLS network, which is often seen as a performance or security risk by enterprises (I always see this as an illogical concern, as ALL public networks are shared at some level).

Link Access Procedure – SDH (LAPS): LAPS, a variant of the original LAP protocol, is an encapsulation scheme for Ethernet over SONET/SDH. LAPS provides a point-to-point connectionless service over SONET/SDH and enables the encapsulation of IP and Ethernet data.

Over IP and MPLS:

Layer 2 Tunnelling Protocol (L2TP): L2TP was originally standardised in 1999, but an updated version, L2TPv3, was published in 2005. L2TP is a Layer 2 data-link protocol that enables data link protocols to be carried over IP networks alongside PPP. This includes Ethernet, frame relay and ATM. L2TPv3 is essentially a point-to-point tunnelling protocol that is used to interconnect single-domain enterprise sites.

L2TPv3 is also known as a Virtual Private Wire service (VPWS) and is aimed at native IP networks. As it is a pseudowire technology it is grouped with Any Transport over MPLS (AToM).

Layer 2 MPLS VPN (L2VPN): Customers’ networks are separated from each other on a shared MPLS network, using the MPLS Label Distribution Protocol (LDP) to set up point-to-point Pseudo Wire Ethernet links. The picture below shows individual customer sites that are relatively near to each other connected by L2TPv3 or L2VPN tunnelling technology based on MPLS Label Switched Paths.
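
For the curious, the sketch below shows roughly what an Ethernet pseudowire puts on the wire in a Layer 2 MPLS VPN: an outer tunnel label to cross the provider’s LSP, an inner pseudowire label identifying the customer circuit, then the customer’s Ethernet frame. The label values are invented for illustration and the optional control word is omitted:

```python
# Two-level MPLS label stack for an Ethernet pseudowire (illustrative only).
import struct

def mpls_label(label: int, ttl: int, bottom_of_stack: bool) -> bytes:
    """Pack one 4-byte MPLS label stack entry: label(20) TC(3) S(1) TTL(8)."""
    word = (label << 12) | (int(bottom_of_stack) << 8) | ttl
    return struct.pack("!I", word)

tunnel_label = mpls_label(label=1001, ttl=64, bottom_of_stack=False)
pw_label     = mpls_label(label=2042, ttl=64, bottom_of_stack=True)

# Toy placeholder for the customer's Ethernet frame.
customer_frame = bytes.fromhex("010203040506") + bytes.fromhex("0a0b0c0d0e0f")

packet = tunnel_label + pw_label + customer_frame
print(packet[:8].hex(" "))   # the two label stack entries
```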

Virtual Private LAN Service (VPLS): A VPLS is a method of providing a fully meshed multipoint wide area Ethernet service using Pseudo Wire tunnelling technology. VPLS is a Virtual Private Network (VPN) that enables all the LANs on a customer’s premises connected to it to communicate with each other. A new carrier that has invested in an MPLS network rather than an SDH / SONET core network would use VPLS to offer Ethernet VPNs to its customers. The picture below shows a VPLS with an LSP link containing multiple MPLS Pseudo-Wire tunnels.

MEF: The Metro Ethernet Forum (MEF) defines several types of Ethernet service:

Ethernet Private Line (EPL). An EPL service supports a single Ethernet Virtual Connection (EVC) between two customer sites.

Ethernet Virtual Private Line (EVPL). An EVPL service supports multiple EVCs between two customer sites.

Virtual Private LAN Service (VPLS), or Ethernet LAN (E-LAN) service, supports multiple EVCs between multiple customer sites.

These MEF-created service definitions, which are not standards as such (indeed they are independent of standards), enable equipment vendors and service providers to achieve third-party certification for their products.

Looking forward:

100GbE: In 2006, the IEEE’s Higher Speed Study Group (HSSG), tasked with exploring what Ethernet’s next speed might be, voted to pursue 100G Ethernet over other options such as 40Gbit/s Ethernet, to be delivered in the 2009/10 time frame. The IEEE will work to standardise 100G Ethernet over distances as far as 6 miles (about 10km) over single-mode fibre optic cabling and 328 feet (100m) over multimode fibre.

PBT or PBB-TE: PBT is a group of enhancements to Ethernet that are defined in the IEEE’s Provider Backbone Bridging Traffic Engineering (PBBTE) group. I’ve covered this in Ethernet goes carrier grade with PBT / PBB-TE?

T-MPLS: T-MPLS is a recent derivative of MPLS – I have covered this in PBB-TE / PBT or will it be T-MPLS?

Well, I hope I’ve covered most of the Ethernet wide area transmission standards activities here. If I haven’t, I’ll add others as addendums. At least they are all on one page!


Islands of communication or isolation?

March 23, 2007

One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is often driven as much by commercial businesses and industry consortiums, which often leads to competing solutions and standards slugging it out in the marketplace (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But, as mere individuals in this world of giants, these issues can cause us users real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways that we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world who owns a phone, as long as you know their number. Whereas many, if not most, of the new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What are the causes of these so-called islands of isolation? Here are a few examples.

Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. The Lotus Notes and Outlook email systems were but one example. This is often still the case today when vendors bolt advanced features onto the basic product that are not available to anyone not using that software – presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that employees of the company have full access to their own services, but these are not available to anyone outside of the firewall for use on an inter-company basis. Combine this with the deployment of the vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only offered to their customer base. Free proprietary PC-PC calls come to mind and more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access and only provide what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension cannot work with someone who is not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: Users groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development: In the absence of standards, companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. I bet you can name one or two of those!

One takeaway from all this is that in the real world you can’t avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, and these islands will probably remain with us for the indefinite future in the competitive world we live in.

Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues within subsets of islands, but overall this remains a major conundrum that limits our ability to communicate at any time, any place and anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing in order to succeed, I would run a mile – no, ten miles! On the other hand, it could be seen as the land of opportunity?


Video history of Ethernet by Bob Metcalfe, Inventor of Ethernet

March 22, 2007

The History of Ethernet

The Evolution of Ethernet to a Carrier Class Technology

The story of Ethernet goes back some 32 years to May 22, 1973. We had early Internet access. We wanted all of our PC’s to be able to access the Internet at high speed. So we came up with the Ethernet…


PBB-TE (PBBTE) / PBT or will it be T-MPLS (MPLS-TP)?

March 22, 2007

Welcome to more acronym hell. In Ethernet goes carrier grade with PBT? I looked at the history of Ethernet and its increasing use in wide area networks. A key aspect of that was for standardisation bodies to provide the additional capabilities to make Ethernet ‘carrier grade’ by creating a connection-oriented Ethernet with improved scalability, reliability and simplified management. (Picture credit: Alcatel) As is the wont of the network industry, not only are commercial network equipment manufacturers extremely competitive (through necessity) but the standards bodies are as well. So we have not only the IEEE’s Provider Backbone Bridging Traffic Engineering (PBBTE) but also the ITU’s Transport-MPLS (T-MPLS) activities competing in a similar space. An ITU technical overview presentation can be found here.

T-MPLS is a recent derivative of MPLS and is being defined in cooperation with the IETF. One way of looking at this could be through the now confused term ‘layers’. IP clearly operates at layer 3, while MPLS itself has been said to operate at layer 2.5 as it does not operate at the layer-2 transport level. T-MPLS, on the other hand, has been specifically designed to operate at the layer-2 transport level, an area that the ITU focuses on.

I guess the simplest way of looking at this is that T-MPLS is a stripped-down MPLS standard from which the components concerned with MPLS’s support of IP connectionless services, irrelevant in a transport context, have been removed. T-MPLS is also based on the extensive standardisation work already undertaken and implemented in SDH / SONET, thus representing a marriage of MPLS and SDH. The main components of T-MPLS were approved as recently as November 2006.

The motivation behind T-MPLS is to provide a compatible transport network that is able to appropriately support the needs of a fully converged NGN IP network while at the same time supporting the ongoing technology enhancements taking place in the optical layer. Because MPLS has gained such popularity in the last few years, it only seems natural to enhance something that is accepted, understood and, more importantly, now deployed by most carriers.

So what is T-MPLS?

T-MPLS operates at the layer-2 data plane level i.e. underneath MPLS or IP/MPLS. It borrows many of the characteristics and capabilities of IETF’s MPLS but focuses on the additional aspects that address the need for any transport layer to provide what is known as high availability i.e. greater than 99.999%. Some of the additions are around the following areas:

  • Clear management and control of bandwidth allocation using MPLS’s Label Switched Paths (LSPs)
  • Improved control of the transport layer’s operational state through SDH-like OA&M (operations, administration and maintenance) capabilities for administering and maintaining the network
  • Improved and new network survivability mechanisms, such as protection and restoration as seen in SDH

Another key aspect, as seen in PBB-TE, is the complete separation of the control and data planes, creating full flexibility for the network management and signalling that take place in the control plane. This signalling is known as Generalised Multi-Protocol Label Switching (GMPLS) (also known as the Automatically Switched Transport Network [ASTN]) and provides the same capabilities as seen in the tools used today to manage optical networks.

T-MPLS has been designed to run over an optical transport hierarchy (OTH) or an SDH / SONET network. I assume the OTH terminology has been adopted retrospectively to make it compatible with SDH in the same way Plesiochronous Digital Hierarchy (PDH) was invented to cover pre-SDH technology 15 years ago. The reason for SDH / SONET support is clear as these transport technologies are not about to go away after the significant investments made by carriers over the last 15 years. We should also not forget that the mobile world is still mainly a time division multiplexed (TDM) world and SDH / SONET provides a bridge for those carriers in both market areas or interconnecting with mobile operators. (Picture credit: Alcatel)

Although T-MPLS has been defined as a generic transport standard, its early focus is clearly centred on Ethernet bringing it into potential competition with PBT / PBB-TE. It will be most interesting to see how this competition pans out.

What are the differences between T-MPLS and MPLS?

Although T-MPLS is a subset of MPLS, there are several enhancements. The principal one of these is T-MPLS’s ability to support bi-directional LSPs. MPLS LSPs are unidirectional, and both the forward and backward paths between nodes need to be explicitly defined. Conventional transport paths are bi-directional.
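
A purely conceptual sketch of that difference (not any real vendor or standards API): with plain MPLS the forward and reverse directions are two independently provisioned unidirectional LSPs, whereas a T-MPLS-style bidirectional LSP binds a co-routed pair together as a single transport object that is set up, monitored and protected as one entity:

```python
# Conceptual model only: pairing two unidirectional LSPs into one
# bidirectional, co-routed transport object.
from dataclasses import dataclass

@dataclass
class UnidirectionalLSP:
    ingress: str
    egress: str
    path: list[str]          # explicit hop list

@dataclass
class BidirectionalLSP:
    forward: UnidirectionalLSP
    reverse: UnidirectionalLSP

    def __post_init__(self):
        # transport-style bidirectional LSPs are expected to be co-routed:
        # the reverse path follows the same nodes in the opposite order
        assert self.reverse.path == list(reversed(self.forward.path))

fwd = UnidirectionalLSP("London", "Paris", ["London", "Calais", "Paris"])
rev = UnidirectionalLSP("Paris", "London", ["Paris", "Calais", "London"])

lsp = BidirectionalLSP(fwd, rev)
print(f"Co-routed bidirectional LSP: {lsp.forward.ingress} <-> {lsp.forward.egress}")
```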

The following MPLS features have been dropped in T-MPLS:

  • Equal Cost Multiple Path (ECMP): This allows traffic to follow two paths that have the same ‘cost’ and is not needed in a connection-oriented optical world. It is a problem in MPLS as well, as it introduces an element of non-predictability that affects traffic engineering activities.
  • LSP merging option: This is a real problem in MPLS anyway, as traffic can be merged from multiple LSPs into a single one, losing source information. Point-to-multipoint (P2MP) LSPs are particularly badly affected.
  • Penultimate Hop Popping (PHP): Labels are removed one node before the egress node to reduce the processing power required on the egress node. A legacy issue caused by underpowered routers that is not of concern today.

There are many other issues that need to be resolved before T-MPLS can become a robust standard set that is ready for wide scale deployment:

  • Interoperability: Interoperability between MPLS and T-MPLS control planes. There are lots of issues in this space and most activities are at an early stage.
  • Application interface: One of the main reasons ATM failed was that the majority of applications needed to be adapted to utilise ATM and it just did not happen. Although the problem with T-MPLS is limited to management tool APIs and interfaces, there are a lot of software companies that will need to undertake a lot of work to support T-MPLS. This will be quite a challenge!

T-MPLS’ vision is similar to that of PBB-TE and encompasses high scalability, reduced OPEX costs, handle any packet service, strong security, high availability, high QoS, simple management, and high resiliency.

The move to T-MPLS has been driven not only by the need to upgrade optical network management but also by the realisation that traditional MPLS-based networks have inherited the IP characteristic of being expensive to manage from an OPEX perspective and very difficult to manage on a large scale.

This has made most carriers rather jittery. On one hand they need to follow the industry gestalt of everything-over-IP, based on the assumption that it will all be cheaper one day as well as enabling them to provide the multiplicity of services wanted by their customers. On the other hand, the principal technology being used to deliver this vision, MPLS, is turning out to be more expensive to manage than the legacy networks it is replacing. Quite a conundrum I think, and one of the principal factors driving the interest in KISS Ethernet services.

The diagram below shows the ages of transport ‘culminating’ in T-MPLS!

Transport Ages (Picture credit: TPACK)

A good overview of T-MPLS can be read courtesy of TPACK.
A side-by-side comparison of PBB-TE and T-MPLS from Meriton.
Addendum: One of the principal industry groups promoting and supporting carrier grade Ethernet is the Metro Ethernet Forum (MEF), and in 2006 they introduced their official certification programme. The certification is currently only available to MEF members – both equipment manufacturers and carriers – to certify that their products comply with the MEF’s carrier Ethernet technical specifications. There are two levels of certification:

MEF 9 is a service-oriented test specification that tests conformance of Ethernet Services at the UNI inter-connect where the Subscriber and Service Provider networks meet. This represents a good safeguard for customers that the Ethernet service they are going to buy will work! Presentation or High bandwidth stream overview

MEF 14: This is a new level of certification that looks at hard QoS, a very important aspect of service delivery not covered in MEF 9. MEF 14 covers hard QoS backed by Service Level Specifications for Carrier Ethernet services: QoS guarantees on Carrier Ethernet business services for corporate customers and guarantees for triple-play data/voice/video services for carriers. Presentation.

Addendum #1: Ethernet-over-everything – what’s everything?

Addendum #2: Enabling PBB-TE – MPLS seamless services


GSM pico-cell’s moment of fame

March 21, 2007

Back in May 2006, the DECT – GSM guard bands, 1781.7-1785 MHz and 1876.7-1880 MHz, originally set up to protect cordless phones from interference by GSM mobiles, were made available to a number of licensees. As is the fashion, these allocations were offered to industry by holding an auction with a reserve price of £50,000 per license. In fact, it was Ofcom’s first auction and I guess they were happy with the results, although I would not have liked to have been in charge of Colt’s bidding team when it came to report to their Board after the auction!

For those of a technical bent, you can see Ofcom’s technical study here, in a document entitled Interference scenarios, coordination between licensees and power limits, and there is also a good overview of cellular networks on Wikipedia.

The most important restriction on the use of this spectrum was outlined by the study:

This analysis confirms that a low power system based on GSM pico cells operating at the 23dBm power (200mW) level can provide coverage in an example multi-storey office scenario. Two pico cells per floor would meet the coverage requirements in the example 50m × 120m office building. For a population of 300 people per floor, the two pico cells would also meet the traffic demand.
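
As a rough sanity check on that traffic claim, here is an Erlang-B sketch; the per-user busy-hour traffic, target blocking and channel count per pico cell are my assumptions, not figures from the study:

```python
# Erlang-B check: can two pico cells serve 300 people per floor?
# Assumed: ~25 mErlang of busy-hour traffic per person, and roughly
# 14 usable traffic channels per pico cell (about two GSM carriers).

def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability for offered traffic on a given channel count."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

USERS_PER_FLOOR = 300
TRAFFIC_PER_USER = 0.025        # Erlangs, assumed
CELLS_PER_FLOOR = 2
CHANNELS_PER_CELL = 14          # assumed

offered_per_cell = USERS_PER_FLOOR * TRAFFIC_PER_USER / CELLS_PER_FLOOR
blocking = erlang_b(offered_per_cell, CHANNELS_PER_CELL)

print(f"Offered traffic per cell: {offered_per_cell:.2f} Erlangs")
print(f"Blocking with {CHANNELS_PER_CELL} channels: {blocking:.2%}")
```

With those assumptions the blocking comes out at well under 1%, which squares with the study’s conclusion; halve the channel count and it gets much tighter.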

It was all done using sealed bids so there was inevitably a wide spectrum (sorry for the pun!) of responses ranging from just over £50,000 to the highest, Colt, who bid £1,513,218. The 12 companies winning licenses were:

British Telecommunications £275,112
Cable & Wireless £51,002
COLT Mobile Telecommunications £1,513,218
Cyberpress Ltd £151,999
FMS Solutions Ltd £113,000
Mapesbury Communications £76,660
O2 £209,888
Opal Telecom £155,555
PLDT £88,889
Shyam Telecom UK £101,011
Spring Mobil £50,110
Teleware £1,001,880

One company that focuses on the supply of GSM and 3G picocells is IP Access based in Cambridge. Their nanoBTS base station can be deployed in buildings, shopping centres, transport terminals, at home, underground stations, rural and remote deployments; in fact, almost anywhere – according to their web site.

Many of the bigger suppliers of GSM and 3G equipment manufacture pico cell platforms as well, Nortel for example.

Of course, even though a pico cell base station (BTS) is lower cost compared to standard base stations, that is not the end of the costs. A pico cell operator still needs to install a base station controller (BSC), which can control a number of base stations, plus a home location register (HLR), which stores the current state of each mobile phone in a database. If the network needs to support roaming customers, a visitor location register (VLR) is also required. On top of this, interconnect equipment to other mobile operators is required. All this does not come cheap!

Pico GSM cells are low power versions of their big brothers and are usually associated with a lower-cost backhaul technology based on IP in place of traditional point to point microwave links. GSM Pico cell technology can be used in a number of application scenarios.

In-building use as the basis of ‘seamless’ fixed-mobile voice services. The use of mobile phones as a replacement to fixed telephones has always been a key ambition for mobile operators. But, as we all know, in-building coverage by cellular operators is often not too good leading to the necessity of taking calls near windows or on balconies. The installation of an in-building pico-cell is one way of providing this coverage and comes under the heading of fixed mobile integration. One challenge in this scenario is the possible need to manually swap SIM cards when entering or exiting the building if a different operator is used inside the building to that outside. Of course, nobody would be willing to do this physically so a whole industry has been born to support dual SIM cards which can be selected from a menu option.

From usability and interoperability perspectives, fixed-mobile integration still represents a major industry challenge. Not the least of the problems is that a swap from one operator to another could trigger roaming charges. This is probably an application area that only the bigger license winners will participate in.

On ships using satellite backhaul: This has always been an obvious application for pico cells, especially for cruise ships.

On aeroplanes: In-cabin use of mobile phones is much more contentious than use on ships and I could write a complete post on this particular subject! But, I guess this is inevitable no matter how irritating it would be to fellow passengers – no bias here! e.g. OnAir with their agreement with Airbus.

Overseas network extensions: I was interested in finding out how some of the winners of the OFCOM auction were getting on now that they held some prime spectrum in the UK so I talked with Magnus Kelly, MD at Mapesbury (MCom). I’m sure they were happy with what they paid as they were at the ‘right end’ of the price spectrum.

Mapesbury are a relatively small service provider set up in 2002 to offer data, voice and wireless connectivity services to local communities. In 2003, MCom acquired the assets of Forecourt Television Limited, also known as FTV. FTV had a network of advertising screens on a selection of petrol station forecourts, among them Texaco. This was when I first met Magnus. Using this infrastructure, they later signed a contract with T-Mobile UK to offer a Wi-Fi service in selected Texaco service stations across the UK.

More recently, they opened their first IP.District, providing complete Wi-Fi coverage over Watford, Herts using 5.8GHz spectrum. MCom has had pilot users testing the service for the last 12 months.

Magnus was quite ebullient about how things were going on the pico-cell front although there were a few sighs when talking about organising and negotiating the allocation of the required number blocks and point codes necessitated by the trials that they have been running.

He emphasised that the technology seemed to work well and that the issues they now had were the same as for any company in the circumstances: creating a business that makes money. They have looked at a number of applications and decided that fixed-mobile integration is probably best left to the major mobile operators.

They are enamoured by the opportunities presented by what are called overseas network extensions. In essence, this means creating what can be envisioned as ‘bubble’ extensions of non-UK mobile networks in the UK. The traffic generated in these extensions can then be backhauled using low cost IP pipes. The core value proposition is low-cost mobile telephone calls aimed at dense local clusters of people using mobile phones. For example, these could be clusters of UK immigrants who would like the ability to make low-cost calls from their mobile phones back to their home countries. Clearly in these circumstances, these pico-cell GSM bubbles would be focused on selected city suburbs, following the same subscriber density logic that drives Wi-Fi cell deployment.

In the large mobile operator camp, O2 announced in November 2006 that it will offer indoor, low-power GSM base stations connected via broadband as part of its fixed-mobile convergence (FMC) strategy, an approach that will let customers use standard mobile phones in the office. I’ll be writing a post about FMC in the future.

Although it is early days for GSM pico cell deployment in the UK, it looks like it could have a healthy future, although this should not be taken for granted. There are a host of technical, commercial, regulatory and political challenges to the seamless use of phones inside and outside of buildings. There are also other technology solutions – IT based rather than wireless – for reducing mobile phone bills. An example of such an innovative approach is supplied by OnRelay.

