Subsea network provider Azea secures a further $20M in funding

April 2, 2007

Following the commissioning of a transoceanic submarine cable upgrade, Azea Networks has secured a Series D funding round of more than $20 million led by TVM Capital and supported by its existing investors.

Mike Hynes, Azea’s chief operating officer, comments, “The successful completion of our latest project once again validates the compelling business case for upgrades whereby carriers can optimize their existing assets and offer new services.”

Azea is already backed by top-tier venture capital investors from both the US and the UK, including Accel Partners, Atlas Venture, and Quester. The company reveals that new investor TVM Capital of Munich, Germany, and Boston, MA, has led this additional funding round to provide working capital for further technical and business development and to take the company to profitability.

“We have been impressed by Azea’s experienced team and their innovative approach to solving the global telecommunications operators’ dilemma of achieving profitable growth,” says Chris Cobbold of TVM Capital, who has also joined Azea’s board of directors.

Scott White, chief executive officer of Azea, observes, “This funding round reflects the continued confidence of our existing investors, and validation from new investor TVM Capital of the submarine optical upgrade market opportunity and Azea’s potential for growth.”

I will be writing an overview of Azea in a forthcoming post.


MPLS-TE and network traffic engineering

April 2, 2007

In my post entitled The rise and maturity of MPLS I said that one of the principal reasons for a carrier to implement MPLS was the need for what is known as traffic engineering in its core IP network. Before the advent of MPLS this capability was supplied by ATM, with many of the world’s terrestrial Internet backbones being based on that technology. ATM provided a switching capability in Points of Presence (PoPs) that enabled automatic switchover to an alternative inter-city ATM pipe in case of failure. I say ‘inter-city’ because ATM was not generally implemented on a transoceanic basis: it was deemed to be expensive and inefficient due to its roughly 17% overhead, commonly known as the cell tax (Picture: Aria Networks planning software, from a presentation to the UK Network Operators Forum, 2006).

IP engineers were keen to remove this additional ATM layer and replace it with a control capability, which became MPLS. However, MPLS in its original guise did not really ‘cut the mustard’ for use in a traffic-engineered regime, so the standard was enhanced through the release of extensions known as MPLS-TE. This post will look at traffic engineering and its sibling activity, capacity planning, and their relationship to MPLS.

Capacity planning

Capacity planning is an activity that is undertaken by all carriers on a ‘regular’ basis. Nowadays this would most likely mean annually, although in the heady days of heavy growth in the latter half of the 1990s it was rarely undertaken less often than quarterly.

Capacity planning is an important function as it directly drives the purchase of additional network equipment or the purchase or building of additional bandwidth capacity for the physical network. Capacity planning is undertaken by the planning department and principally consists of the following activities:

Network topology: The starting point of any planning exercise is to profile the existing network to act as a benchmark to build upon. This consists of two things: a complete database of the nodes or PoPs, and of the network link bandwidths, i.e. their maximum capacities. This sounds easier than it is in reality. In many instances carriers do not know the full extent of the equipment deployed, often the result of one too many acquisitions. This discovery of assets can be based on an on-going manual spreadsheet or database exercise, or software can be used to automatically discover the currently installed network topology. Another way is to export network configurations from network equipment such as routers.
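
As a minimal sketch of this discovery step, assuming the exported data has already been flattened into a simple CSV of links (the file name and column names below are hypothetical), the inventory can be loaded into a structure that the later planning steps can work on:

    import csv
    from collections import defaultdict

    def load_topology(path):
        """Build a node and link inventory from an exported link list.

        Assumes a hypothetical CSV with columns node_a, node_b, bandwidth_mbps.
        Real exports (router configs, NMS dumps) need a vendor-specific parser.
        """
        links = defaultdict(dict)          # links[a][b] = capacity in Mbit/s
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                a, b = row["node_a"], row["node_b"]
                capacity = float(row["bandwidth_mbps"])
                links[a][b] = capacity
                links[b][a] = capacity     # record both directions
        return links

    # topology = load_topology("exported_links.csv")
    # print(sorted(topology))              # the PoPs discovered from the export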

Traffic Matrices: What is needed next is detailed link resource utilisation data, or traffic profiles, for each service type. These are often called traffic matrices. Links are the pipes interconnecting a network’s PoPs and their utilisation is how much of the links’ bandwidth is being used. As IP traffic is very dynamic and varies tremendously according to the time of day, good traffic engineering leads to good operational practice, such as never loading a link beyond a certain percentage, say 50%. Every carrier has its own standard, which could quite easily be made higher to save money but could risk poor network performance at peak times. Clearly, engineers and accountants have different perspectives in these discussions! (Raw IP traffic flow: Credit. Cariden.)
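
As a toy illustration of that operational rule (the link names, figures and 50% threshold below are made up), a planner could flag the links whose peak utilisation breaches the carrier’s chosen limit:

    def overloaded_links(peak_mbps, capacity_mbps, threshold=0.5):
        """Return the links whose peak utilisation exceeds the design threshold.

        peak_mbps and capacity_mbps are dicts keyed by (node_a, node_b);
        threshold is the carrier-specific rule, e.g. never load a link beyond 50%.
        """
        flagged = {}
        for link, peak in peak_mbps.items():
            utilisation = peak / capacity_mbps[link]
            if utilisation > threshold:
                flagged[link] = round(utilisation, 2)
        return flagged

    # A 10 Gbit/s London-Manchester link peaking at 6.2 Gbit/s gets flagged:
    print(overloaded_links({("LON", "MAN"): 6200.0}, {("LON", "MAN"): 10000.0}))
    # {('LON', 'MAN'): 0.62}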

Demand Forecast: At this point, capacity planning engineers ask their product marketing and sales brethren for a service sales forecast for the next planning cycle, which could be between one and three years. If you talk to any planning engineer I’m sure you will hear plaintive cries such as “I can’t plan unless I get a forecast”; however, can you think of a worse group of individuals to get this sort of information from than sales people? I would guess that this is one of the biggest challenges planning departments face.

Once topology, current traffic matrices and forecasts for each service (IP transit, VoIP, IP VPNs, IPTV etc.) have been obtained, the task of planning for the next capacity planning period can begin. This results in, or should result in, a clear plan for the company that covers such issues as the following (a simple sketch of the link-upgrade arithmetic follows the list):

  • What existing link upgrades are required
  • What new links are required
  • What new or expanded backup links are required
  • What new Capital Expenditure (CAPEX) is required
  • What increase in Operational Expenditure (OPEX) is required
  • What new migration or changeover plans are required
  • Lots of management reports, spreadsheets and graphs
  • Caveats about the on-going unreliability of the growth forecasts received!
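
As a back-of-the-envelope sketch of the link-upgrade arithmetic (the single blended growth factor, link names and 50% rule are illustrative only; a real plan would work per service and per planning period):

    def links_needing_upgrade(peak_mbps, capacity_mbps, growth, threshold=0.5):
        """Project each link's peak traffic forward by a growth factor and list
        the links that will breach the design threshold within the period."""
        upgrades = []
        for link, peak in peak_mbps.items():
            projected = peak * growth
            if projected / capacity_mbps[link] > threshold:
                upgrades.append((link, round(projected)))
        return upgrades

    # A 10 Gbit/s link at 4 Gbit/s today with +40% forecast growth: 5.6 Gbit/s,
    # which is above the 50% design limit, so it goes on the upgrade list.
    print(links_needing_upgrade({("LON", "AMS"): 4000}, {("LON", "AMS"): 10000}, 1.4))
    # [(('LON', 'AMS'), 5600)]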

Traffic engineering (TE)

While capacity planning is a long-term, forward-looking activity concerned with optimising network growth and performance in the face of growing service demand, traffic engineering is focused on how the network performs in delivering services, at a much finer granularity.

Traffic engineering in networks has a history as long as telephones have been around and is closely associated with A.K. Erlang. One of the fundamental metrics in the voice Public Switched Telephone Network (PSTN) was named after him: the Erlang. An Erlang is a measure of the occupancy or utilisation of voice links, regardless of whether traffic is flowing or not. Erlang-based calculations were, and still are, used to calculate Quality of Service (QoS) and the optimum utilisation of fixed-bandwidth links, taking into account the amount of traffic at peak times.
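
For the curious, the classic Erlang B calculation fits in a few lines; this is a sketch using the standard recursive form and illustrative traffic figures:

    def erlang_b(offered_erlangs, circuits):
        """Erlang B blocking probability: the chance that a new call finds all
        circuits busy, given the offered traffic in Erlangs on a trunk group."""
        b = 1.0
        for k in range(1, circuits + 1):
            b = (offered_erlangs * b) / (k + offered_erlangs * b)
        return b

    # 20 Erlangs of busy-hour traffic offered to a group of 30 circuits:
    print(round(erlang_b(20.0, 30), 4))    # roughly 0.0085, i.e. under 1% blocking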

Traffic engineering is even more important in the highly dynamic world of IP networks, and carriers can gain a considerable number of benefits if traffic engineering is taken seriously by management:

  • Cost optimisation: Providing network links is an expensive pastime by the time you take IP equipment, optical equipment, OSS costs and OPEX into account. The more fully a network is utilised without degradation, the more money can flow to the bottom line.
  • Congestion management: If a network is badly traffic engineered, whether through under-planning, under-spending or under-resourcing, there is more chance that network problems such as outages or congestion will impact a customer’s experience. The telecoms world is stuffed full of examples of where this has happened.
  • Dynamic services and traffic profiles: Traffic profiles and flows can change quite considerably over time when new services with different traffic profiles are launched without involving network planners. In an age when there is considerable management pressure to reduce time-to-market, this can happen more often than many companies would admit to.
  • Efficient routing: In MPLS and the limitations of the Internet I wrote about how one of the strengths of the IP protocol is that a packet can always find a path to its destination if one exists, but that this strength creates problems when a service requires predictable performance. Traffic-engineered networks provide paths for critical services that are deterministic and predictable, both from a path perspective and from a Quality of Service (QoS) perspective. It would not be an overstatement to say that this is pretty much mandatory in these days of converged Next Generation Networks.
  • Availability, resilience and fast restoration: If a network’s customers see an outage at any time, the consequences can be catastrophic from a churn or brand-image perspective, so high availability is a crucial network metric that needs to be monitored. There is a tremendous difference in perceived reliability between PSTN voice networks and IP networks. For example, when was the last time your home telephone broke down? It’s not that PSTN networks are more reliable than IP networks (they’re not); it’s just that PSTN networks have been better designed to work transparently around broken equipment or broken links. Subscribers, to use that old telephony term, are blissfully unaware of a network outage. Of course, if a digger cuts through a major fibre and the SDH backbone ring is not actually a ring… Well, that’s another story.
  • QoS and new services: Real-time services need the network to be able to separate latency-critical traffic such as VoIP from non-critical traffic such as email. Traffic engineering is a critical tool in achieving this.

Multi-protocol Label Switching – Traffic Engineering (MPLS-TE)

The ‘-TE’ suffix is used to describe other technologies as well, notably in the attempts to make Ethernet carrier grade in quality, which are discussed in my posts on PBB-TE and on T-MPLS, the latter being built on the back of MPLS-TE (Picture credit: OpNet planning software).

As mentioned above, before the advent of MPLS-TE, carriers of IP traffic relied on the underlying transport network, such as ATM, for traffic engineering. MPLS-TE consists of a set of extensions to MPLS that enable native traffic engineering within an MPLS environment. Of course, this does not remove the need to traffic engineer any layer-1 transport network that MPLS may be carried over. MPLS-TE is covered in the IETF’s RFC 2702.

What does MPLS-TE bring to the TE party?

(1) Explicit or constraint-based end-to-end routing: The picture below shows a small network where traffic flowing from the left could travel via two alternative paths to exit on the right. This was specifically the environment that Interior Gateway Protocol (IGP) routing algorithms such as Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) were designed to operate in, routing all traffic over the shortest path as shown below (Picture credit: NANOG).

This can easily lead to problems, with the north path shown above becoming congested while the south path remains unused, wasting expensive network assets. Before MPLS-TE, standard IGP routing metrics could be ‘adjusted’, ‘manipulated’ or ‘tweaked’ to reduce this possibility; however, doing so could be very complicated and challenging to manage on a day-to-day basis, and usually required a network-wide plan. In other words, it is a bit of a horror to manage.
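
To make the shortest-path behaviour concrete, here is a minimal Dijkstra sketch over an illustrative five-node topology (not the one in the picture): every flow from A to Z lands on the short ‘north’ path while the ‘south’ path carries nothing.

    import heapq

    def shortest_path(graph, src, dst):
        """Plain Dijkstra over IGP metrics: every flow follows this one path."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbour, metric in graph[node].items():
                if neighbour not in seen:
                    heapq.heappush(queue, (cost + metric, neighbour, path + [neighbour]))
        return None

    # Illustrative network: a short 'north' path and a longer 'south' path.
    graph = {
        "A": {"N1": 1, "S1": 2},
        "N1": {"A": 1, "Z": 1},
        "S1": {"A": 2, "S2": 2},
        "S2": {"S1": 2, "Z": 2},
        "Z": {"N1": 1, "S2": 2},
    }
    print(shortest_path(graph, "A", "Z"))   # (2, ['A', 'N1', 'Z']) - all traffic goes north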

With MPLS-TE, using an extension to the Resource Reservation Protocol (RSVP) signalling protocol known, unsurprisingly, as RSVP-TE, explicit paths can be set up to force selected traffic flows over them, as shown below.

This deterministic routing helps reduce congestion on particular links, loads the network more evenly (reducing the number of ‘orphaned’ links), ensures better utilisation of the network, helps planners separate latency-dependent services from non-critical services, and helps manage upgrade costs (Picture credit: NANOG).

These paths are called TE tunnels or label switched paths (LSPs). LSPs are unidirectional so two need to be specified to handle bi-directional traffic between two nodes.

(2) Constraint Based Routing: Network planners are now able to undertake what is known as constraint-based routing, where traffic paths can be computed (aka path computation) to meet constraints other than the shortest IGP path that drives OSPF and IS-IS. The constraint could be the links with the least utilisation, the least delay, the most free bandwidth, or links that use a carrier’s own, rather than a partner’s, infrastructure.
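
A minimal sketch of the idea, reusing the illustrative graph and shortest_path function from the Dijkstra example earlier: prune the links that cannot satisfy the constraint (here, free bandwidth), then run an ordinary shortest-path search over what remains.

    def cspf(graph, free_bw, src, dst, required_mbps):
        """Constraint-based routing sketch: drop links with insufficient free
        bandwidth, then route over the pruned topology.
        free_bw maps a directed (node, neighbour) pair to available Mbit/s."""
        pruned = {
            node: {nbr: metric for nbr, metric in nbrs.items()
                   if free_bw.get((node, nbr), 0) >= required_mbps}
            for node, nbrs in graph.items()
        }
        return shortest_path(pruned, src, dst)   # from the earlier sketch

    # If the 'north' link A-N1 has only 100 Mbit/s free, a 400 Mbit/s tunnel
    # is forced onto the longer but emptier 'south' path.
    free_bw = {("A", "N1"): 100, ("N1", "Z"): 1000,
               ("A", "S1"): 1000, ("S1", "S2"): 1000, ("S2", "Z"): 1000}
    print(cspf(graph, free_bw, "A", "Z", 400))   # (6, ['A', 'S1', 'S2', 'Z'])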

(3) Bandwidth reservation: DiffServ-aware MPLS-TE (DS-TE) enables per-class traffic engineering across an MPLS-TE network. Physical interfaces and TE tunnels / LSPs can be told how much bandwidth can be reserved or used. This can be used to dynamically allocate, share and adjust over time the bandwidth given to critical services such as VoIP and to best-effort traffic such as Internet browsing and email.
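
In the spirit of that per-class reservation (the class names and pool sizes below are purely illustrative), admitting a new tunnel against a link’s bandwidth pools looks something like this:

    class BandwidthPools:
        """Per-class reservable bandwidth on a link, DS-TE style (sketch only)."""
        def __init__(self, pools_mbps):
            self.free = dict(pools_mbps)       # e.g. {"voice": 200, "best-effort": 800}

        def admit(self, traffic_class, mbps):
            """Reserve bandwidth for a tunnel if its class pool can hold it."""
            if self.free.get(traffic_class, 0) >= mbps:
                self.free[traffic_class] -= mbps
                return True
            return False

    link = BandwidthPools({"voice": 200, "best-effort": 800})
    print(link.admit("voice", 150))   # True  - the VoIP tunnel fits in its pool
    print(link.admit("voice", 100))   # False - only 50 Mbit/s of the voice pool left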

(4) Fast Re-Route (FRR): MPLS-TE supports local rerouting around a faulty node (node protection) or a faulty link (link protection). Planners can define alternative paths to be used when a failure occurs, and FRR can reroute traffic in tens of milliseconds, minimising downtime. However, although FRR sounds like a good idea, the amount of computing effort required to calculate FRR paths for a complete network is very significant. If a carrier does not have the appropriate path computation tools, using FRR could cause significant problems by rerouting traffic non-optimally to a segment of the network that is already congested rather than to one that is under-utilised (Picture: an LSP tunnel and its backup. Credit: Wandl).
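
As a final sketch in this series (again reusing the illustrative graph and shortest_path from above), computing a link-protection bypass amounts to removing the protected link and recomputing the path; doing this for every protected link and node in a large network is where the computational burden comes from.

    def link_bypass(graph, protected_link, src, dst):
        """Compute a bypass path that avoids one protected link - the kind of
        calculation FRR planning needs for every protected link in the network."""
        a, b = protected_link
        trimmed = {
            node: {nbr: metric for nbr, metric in nbrs.items()
                   if (node, nbr) not in ((a, b), (b, a))}
            for node, nbrs in graph.items()
        }
        return shortest_path(trimmed, src, dst)

    # Bypass for the A-N1 link: traffic detours via the south path.
    print(link_bypass(graph, ("A", "N1"), "A", "Z"))   # (6, ['A', 'S1', 'S2', 'Z'])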

There are other additions to MPLS covered by the MPLS-TE extensions, but these are minor compared to the ones described above.

Practical use of MPLS-TE

One would imagine that, with all the benefits that accrue to a carrier from using MPLS-TE, such as enhanced service quality, easier new-service deployment and reduced risk, carriers would flock to it. However, this is not necessarily the case as, as with all new technologies, there are alternatives:

  • Traditional over-provisioning: Traffic engineering management can be a very complicated task if you attempt to analyse all the flows within a large network. One of the traditional ways that many carriers get round this onerous and challenging task is simply to over-provision their networks heavily. If a network is geographically constrained or simply small, then throwing bandwidth at the network can be seen as a simple and unchallenging solution. Network equipment is so much cheaper than it used to be (and smaller carriers can buy equipment from eBay – cough, cough!). Dark fibre or multi-Gbit/s links can be bought or leased relatively cheaply as well. So why bother putting in the effort to traffic engineer a network properly?
  • The underlying network does the TE: Many carriers still use the underlying network to traffic engineer their networks. Although ATM is not around as much as it used to be, SDH still is.
  • Stick with IGP adjustment: Many carriers still stick to the simple IGP metric adjustment discussed earlier as it handles the simple TE activities they require. True, many would moan about how difficult this is to manage, but migrating to an MPLS-TE environment could be seen as a costly exercise and they currently do not have the time, resources or money to undertake the transition.
  • Let’s wait and see: There are so many considerations and competitive options that the easiest decision to make is to do nothing.

Round up

IP traffic engineering is a hot subject and brings forth a considerable variety of views and emotions when discussed. Many carriers have stuck with methods that they have outgrown but are hesitant about making the jump, thinking that something better will come along. Many try to avoid the issue completely by simply over-provisioning and taking an ultra-KISS approach.

However, those carriers that are truly pursuing a converged Next Generation architecture, with all services based on IP and legacy services carried over pseudowire tunnels, cannot avoid undertaking real traffic engineering to the degree once undertaken by the old PSTN planning departments. To a great extent, this could be seen as the wild child of IP networks growing up to become a mature and responsible adult. The IP industry has a long way to go though, as creating standards is difficult enough but getting people to use them is something else!

Whatever else, simply sitting and waiting is not the solution…

  • Aria Networks (UK)
  • The economics of network control (USA)
  • Making networks perform (USA)
  • You have one network, We have one plan (USA)
  • Wandl wide area design laboratory (USA)

Addendum #1: The follow-on article to this post is: Path Computation Element (PCE): IETF’s hidden jewel

Addendum #2: GMPLS and common control

Addendum #3: Aria Networks shows the optimal path


Making SDH and DWDM packet friendly

March 30, 2007

Back in 1993, I wrote about the advances taking place in fibre optic technologies and optical amplifiers. At that time, technology development was principally concerned with improving transmission distances using optical amplifier technology and with increasing data rates. These optical cables carried a single wavelength and hence provided a single data channel.

Wide-area traffic in the early 1990s was principally dominated by Public Switched Telephone Network (PSTN) telephony traffic, as this was well before the explosion in data traffic caused by the Internet. When additional throughput was required, it was relatively simple to lay down additional fibres in a terrestrial environment. Indeed, this became standard procedure to the extent that many fibres were laid in a single pipe with only a few being used, or ‘lit’ as it was known; unlit fibre strands were called dark fibre. For terrestrial networks, when increasing traffic demanded additional bandwidth on a link it was a simple job to add additional ports to the appropriate SDH equipment and light up an additional dark fibre.

Wave Division Multiplexing (Picture credit: photeon)

In undersea cables, adding additional fibres to support traffic growth was not so easy, so the concept of Wave Division Multiplexing (WDM) came into common usage for point-to-point links (laboratory development of WDM actually goes back to the 1970s). The use of WDM enabled transoceanic carriers to upgrade the bandwidths of their undersea cables without the need to lay additional cables, which would cost billions of dollars.

As shown in the picture, a WDM-based system uses multiple wavelengths, thus multiplying the available bandwidth by the number of wavelengths that can be supported. The number of wavelengths that could be used, and the data rate on each wavelength, were limited by the quality of the optical fibre being upgraded and the state-of-the-art of the optical termination electronics. Multiplexers and de-multiplexers at either end of the cable combine and split the aggregate signal into separate channels, converting to and from electrical signals.

A number of WDM technologies or architectures were standardised over time. In the early days, Coarse Wavelength Division Multiplexing (CWDM) was relatively proprietary in nature and meant different things to different companies. CWDM combines up to 16 wavelengths onto a single fibre and uses an ITU standard 20nm spacing between the wavelengths, from 1310nm to 1610nm. With CWDM technology, since the wavelengths are relatively far apart compared to DWDM, the components are generally relatively cheap.

One of the major issues at the time was that Erbium Doped Fibre Amplifiers (EDFAs), as described in optical amplifiers, could not be utilised, due to the wavelengths selected and the frequency stability required to be able to de-multiplex the multiplexed signals.

In the late 1990s there was an explosion of development activity aimed at deriving benefit from the concept of Dense Wavelength Division Multiplexing (DWDM), in order to be able to utilise EDFA amplifiers that operate in the 1550nm window. EDFAs will amplify any number of wavelengths modulated at any data rate, as long as they are within the amplification bandwidth.

DWDM combines up to 64 wavelengths onto a single fibre and uses an ITU standard that specifies 100GHz or 200GHz spacing between the wavelengths, arranged in several bands around 1500-1600nm. With DWDM technology the wavelengths are closer together than those used in CWDM, resulting in the multiplexing equipment being more complex and expensive than CWDM. However, DWDM allowed a much higher density of wavelengths and enabled longer distances to be covered through the use of EDFAs. DWDM systems were developed that could deliver hundreds of Gigabits, and later Terabits, of data per second over a single fibre using up to 40 or 80 simultaneous wavelengths (e.g. Lucent, 1998).
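
To put some numbers on the two grids (using the channel counts and spacings quoted above, plus the ITU anchor frequency of 193.1 THz for the 100GHz DWDM grid), here is a quick sketch:

    C = 299_792_458.0   # speed of light, m/s

    # CWDM, using the figures in the text: 20 nm spacing from 1310 nm to 1610 nm.
    cwdm_nm = list(range(1310, 1611, 20))
    print(len(cwdm_nm), cwdm_nm[:3], cwdm_nm[-1])    # 16 channels: 1310, 1330, ... 1610

    # DWDM on the ITU 100 GHz grid, anchored at 193.1 THz (about 1552.5 nm),
    # which sits inside the 1550 nm EDFA amplification window.
    def dwdm_channel_nm(n, spacing_ghz=100):
        freq_hz = 193.1e12 + n * spacing_ghz * 1e9
        return C / freq_hz * 1e9

    for n in (-1, 0, 1):
        print(n, round(dwdm_channel_nm(n), 2))       # ~1553.33, 1552.52, 1551.72 nm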

I wouldn’t claim to be an expert in the subject, but I would expect that in dense urban environments, or over longer runs where access to the fibre ducts is available, it is considerably cheaper to install additional fibre than to install expensive DWDM systems. An exception to this would be a carrier installing cables across a continent. If dark fibre is available then it’s an even simpler decision.

Although considerable advances were taking place in optical transport with the advent of DWDM systems, the existing SONET and SDH standards of the time were limited to working with a single wavelength per fibre and with single optical links in the physical layer. SDH could cope with astounding data rates on a single wavelength, but could not be used with the emerging DWDM optical equipment.

Optical Transport Hierarchy

This major deficiency in SDH / SONET led to further standards development initiatives to bring it “up to date”. These are known as the Optical Transport Network (OTN), working in an Optical Transport Hierarchy (OTH) world; OTH follows the same nomenclature as used for PDH and SDH networks.

The ITU-T G.709 standard, Interfaces for the OTN (released between 1999 and 2003), is a standardised set of methods for transporting wavelengths in a DWDM optical network that allows the use of all-optical switches, known as Optical Cross-Connects, without requiring expensive optical-electrical-optical conversions. In effect, G.709 provides a service abstraction layer between services such as standard SDH, IP, MPLS or Ethernet and the physical DWDM optical transport layer. This capability is also known as OTN/WDM, in a similar way that the term IP/MPLS is used. Optical signals with bit rates of 2.5, 10, and 40 Gbit/s were standardised in G.709 (G.709 overview presentation) (G.709 tutorial).

The functionality added to SDH in G.709 is:

  • Management of optical channels in the optical domain
  • Forward error correction (FEC) to improve error performance and enable longer optical spans
  • Standard methods for managing end-to-end optical wavelengths

Other SDH extensions to bring SDH up to date and make it ‘packet friendly’

Almost in parallel with the development of the G.709 standards, a number of other extensions were made to SDH to make it more packet friendly.

Generic Framing Procedure (GFP): The ITU, ANSI, and IETF have specified standards for transporting various services such as IP, ATM and Ethernet over SONET/SDH networks. GFP is a protocol for encapsulating packets over SONET/SDH networks.

Virtual Concatenation (VCAT): Multiple SONET / SDH containers are grouped into a single virtual payload so that data traffic, such as Packet over SONET (POS) or Ethernet, can be transported more efficiently in a right-sized channel.

Link Capacity Adjustment Scheme (LCAS): When customers’ needs for capacity change, they want the change to occur without any disruption to the service. LCAS, a VCAT control mechanism, provides this capability.
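
As a small worked example of what VCAT buys (using the commonly quoted VC-4 payload of roughly 149.76 Mbit/s), sizing a virtually concatenated group for Gigabit Ethernet looks like this:

    import math

    VC4_PAYLOAD_MBPS = 149.76    # approximate payload of one SDH VC-4

    def vcat_group_size(service_mbps):
        """How many VC-4s a VCAT group needs to carry a given packet service."""
        return math.ceil(service_mbps / VC4_PAYLOAD_MBPS)

    # Gigabit Ethernet mapped with GFP into a virtually concatenated group:
    print(vcat_group_size(1000))   # 7, i.e. VC-4-7v (~1.05 Gbit/s of payload)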

These standards have helped SDH / SONET to adapt to an IP or Ethernet packet based world which was missing in the original protocol standards of the early 1990s.

Next Generation SDH (NG-SDH)

If a SONET or SDH network is deployed with all the extensions that make it packet friendly, it is commonly called Next Generation SDH (NG-SDH). The diagram below shows the different ages of SDH, concluding with the latest ITU standards work called T-MPLS (I cover T-MPLS in PBT – PBB-TE or will it be T-MPLS?).

Transport Ages (Picture credit: TPACK)

Multiservice provisioning platform (MSPP)

Another term in widespread use with advanced optical networks is MSPP.

SONET / SDH equipment uses what are known as add / drop multiplexers (ADMs) to insert or extract data from an optical link. Technology improvements enabled ADMs to include cross-connect functionality to manage multiple fibre rings and DWDM in a single chassis. These new devices replaced multiple legacy ADMs and also allowed connections directly from Ethernet LANs to a service provider’s optical backbone. This capability was a real benefit to metro networks sitting between enterprise LANs and long-distance carriers.

There are almost as many variant acronyms in use as there are equipment vendors:

  • Multiservice provisioning platform (MSPP): includes SDH multiplexing, sometimes with add-drop, plus Ethernet ports, sometimes packet multiplexing and switching, sometimes WDM.
  • Multiservice switching platform (MSSP): an MSPP with a large capacity for TDM switching.
  • Multiservice transport node (MSTN): an MSPP with feature-rich packet switching.
  • Multiservice access node (MSAN): an MSPP designed for customer access, largely via copper pairs carrying Digital-Subscriber Line (DSL) services.
  • Optical edge device (OED): an MSSP with no WDM functions.

This has been an interesting post in that it has brought together many of the technologies and protocols discussed in previous posts, in particular SDH, Ethernet and MPLS, and joined them to optical networks. It seems strange to say on one hand that the main justification for deploying converged Next Generation Networks (NGNs) based on IP is to simplify existing networks and hence reduce costs, and then consider the complexity and plethora of acronyms and standards associated with doing that!

I think there is only one area that I have not touched upon and that is the IETF’s initiative – Generalised MPLS (GMPLS) or ASON / ASTN, but that is for another day!

Addendum: Azea Networks, upgrading submarine cables


Colo crisis for the UK Internet and IT industry?

March 29, 2007

The UK Internet and IT industries are facing a real crisis that is creeping up on them at a rate of knots (Source: theColocationexchange). In the UK we often believe that we are at the forefront of innovation and the delivery of creative content services such as Web 2.0-based services and IPTV, but this crisis could force many of these services to be delivered from non-UK infrastructure over the next few years.

So, what are we talking about here? It’s the availability of colocation (colo) services and what you need to pay to use them. Colocation is an area where the UK has excelled and has led Europe for a decade, but this could be set to change over the next twelve months.

It’s no secret to anyone that hosts an Internet service that prices have gone through the roof for small companies in the last twelve months, forcing many of the smaller hosters to just shut up shop. The knock-on effects of this will have a tremendous impact on the UK Internet and IT industries, as it also affects large content providers, content distribution companies such as Akamai, telecom companies and core Internet Exchange facilities such as LINX. In other words, pretty much every company involved in delivering Internet services and applications.

We should be worried.

Estimated London rack pricing to 2007 (Source: theColocationexchange)

The core problem is that available co-location space is not just in short supply in London, it is simply disappearing at an alarming and accelerating rate, as shown in the chart below (it is even worse in Dublin). It could easily run out for anyone who does not have deep pockets.

Estimated space availability in London area (Source: theColocationexchange)

What is causing this crisis?

Here are some of the reasons.

London’s ever increasing power as a world financial hub: According to the London web site: “London is the banking centre of the world and Europe’s main business centre. The headquarters of more than 100 of Europe’s 500 largest companies are in London, and a quarter of the world’s largest financial companies have their European headquarters in London too. The London foreign exchange market is the largest in the world, with an average daily turnover of $504 billion, more than New York and Tokyo combined.”

This has been a tremendous success for the UK and has driven a phenomenal expansion of financial companies’ needs for data centre hosting, and they have turned to 3rd party colo providers to meet those needs. In particular, the need for disaster recovery has driven them not only to expand their own in-house capabilities but also to place infrastructure in 3rd party facilities. Colo companies have welcomed these prestigious companies with open arms in the face of the telecomms industry meltdown post 2001.

Sarbanes-Oxley compliance: The necessity for any company that operates in the USA to comply with the onerous Sarbanes-Oxley regulations has had a tremendous impact on the need to manage and audit the capture, storage, access, and sharing of company data. In practice, more analysis and more disk storage are needed leading to more colo space requirements.

No new colo build for the last five years: As in the telecommunications world, life was very difficult for colo operators in the five years following the new millennium. Massive investment in the latter half of the 1990s was followed by pretty much zero expansion of the industry, which remained effectively in stasis. One exception to this is IX Europe, who are expanding their facilities around Heathrow. However, builds such as this will not have any great impact on the market overall, even though they will be highly profitable for the companies expanding.

However, in the last 24 months both the telecomms and the colo industries are seeing a boom in demand and a return to a buoyant market last seen in the late 1990s (Picture credit: theColocationexchange).

Consolidation: In London particularly, there has been a large trend towards consolidation and roll-up of colo facilities. A prime example of this would be Telecity (backed by private equity finance from 3i Group, Schroders and Prudential), who have bought Redbus and Globix in the last twenty-four months. This consolidation has included a number of smaller colo operators that focused on supplying smaller customers. The now larger operators have really concentrated on winning the lucrative business of corporate data centre outsourcing, which is seen as highly profitable with long contract periods.

Facility absorption: In a similar way that many telecommunications companies were sold at very low prices post 2000, the same trend happened in the colo industry. In particular, many of the smaller colos west of London were bought at knock-down valuations by carriers, large third party systems integrators and financial houses. This effectively took that colo space permanently off the market.

Content services: There has been a tremendous infrastructure growth in the last eighteen months by the well known media and content organisations. This includes all the well known names such as Google, Yahoo and Microsoft. It also includes companies delivering newer services such as IP TV and content distribution companies such as Akamai. It could be said with justification, that this growth is only just beginning and these companies are competing directly with the financial community, enterprises and carriers for what little colo space there is left.

Carrier equipment rooms: Most carriers have their own in-house colo facilities to house their own equipment or offer colo services to their customers. Few have invested in expanding these facilities in the last few years, so most are now 100% full, forcing carriers to go to 3rd party colos for expansion.

Instant use: When enterprises buy space today they immediately use it rather than letting it lie fallow.

How has the Colo industry reacted to this ‘Colo 2.0’ spurt of growth?

With demand going through the roof and with a limited amount of space available in London, it is clearly a seller’s market. Users of colo facilities have seen rack prices increase at an alarming rate. For example: Colocation Price Hikes at Redbus.

However, the rack price graph above does not tell the whole story, as the price of power, which used to be a small additional charge or even thrown in for free, has risen by a factor of three or even four in the last twelve months.

Colos used to focus on selling colo space solely on the basis of rack footprints. However, the new currency they use is Amperes, not square feet measured in rack footprints. This is an interesting aspect that is not commonly understood by individuals who have not had to buy space in the last twelve months.

This is because colo facilities are not only capped in the amount of space they have for racks, they also have caps on the amount of electricity that the site can take from the local power company. Also, as a significant percentage of this input power is turned into heat by the hosted equipment, colo facilities have needed to make significant investments in cooling to keep the equipment operating within its temperature specifications. They also need to invest in appropriate back-up generators and batteries to power the site in case of an external power failure.

Colo contracts are now principally based on the amount of current the equipment consumes, not its footprint. If the equipment in a rack only takes up a small space but consumes, say, 8 to 10 Amps, then the rest of the rack has to remain empty unless you are willing to pay for an additional full rack’s worth of power.

If a rack owner sub-lets shelf space in a rack to a number of their customers, each one has to be monitored with individual Ammeters placed on each shelf.

One colo explains this to their customers in a rather clumsy way:

“Price Rises Explained: Why has there been another price change?

By providing additional power to a specific rack/area of the data floor, we are effectively diverting power away from other areas of the facility, thus making that area unusable. In effect, every 8amps of additional power is an additional rack footprint.

The price increase reflects the real cost to the business of the additional power. Even with the increase, the cost per additional 8amps of power is still substantially less, almost half the cost of the standard price for a rack foot print including up to 8amp consumption.”

Another point to bear in mind here is the current that individual servers consume. With the sophisticated power control that is embedded into today’s servers – just like your home PC – there is a tremendous difference in the amount of current a server consumes in its idle state compared to full load. The amount of equipment placed in a rack is limited by the possible full load current consumption even if average consumption is less. In the case of an 8 Amp rack limit, there would also be a hard trip installed by the colo facility that turns the rack off if current reaches say 10 Amps.

If the equipment consists of standard servers or telecom switches this can be accommodated relatively easily, but if a company offers services such as interactive games or IPTV and fills a rack with blade (card) servers, the rack can quite easily consume 15 to 20kW of power, or 60 Amps and more! I’ll leave it to you to undertake the commercial negotiations with your colo, but take a big chequebook!
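
The arithmetic behind those figures is straightforward; here is a small sketch assuming a single-phase 230V supply (colo facilities differ, and three-phase feeds change the numbers):

    SUPPLY_VOLTS = 230.0   # assumed single-phase supply voltage

    def rack_kw(amps):
        return amps * SUPPLY_VOLTS / 1000.0

    def rack_amps(kw):
        return kw * 1000.0 / SUPPLY_VOLTS

    print(round(rack_kw(8), 2))      # ~1.84 kW: what a standard 8 Amp rack allowance buys
    print(round(rack_amps(15), 1))   # ~65 A: a 15 kW blade-server rack, i.e. roughly
                                     # eight times the standard per-rack allocation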

What could be the consequences of the up and coming crisis?

Empty floors like the one seen in the picture are long gone in colo facilities in London.

Virtual hosters caught in a trap: Clearly, if a company does not own its own colo facilities but offers colo-based services to its customers, it could prove very difficult and expensive to guarantee access to sufficient space, sorry, Amperage, to meet its customers’ growth needs in an exploding market. As in the semiconductor market, where fabless companies are always the first hit in boom times, those companies that rely on 3rd party colos could have significant challenges facing them in the coming months.

No low-cost facilities will hit small hosting companies: The issues raised in this post are significant even for multinational companies, but for small hosting companies they are killers. Many small hosting companies who supply SME customers have already put up the shutters on their business, as it has proved not to be cost effective to pass these additional costs on to their customers.

Small Web 2.0 service companies and start-ups: The reduction in availability of low-cost colo hosting could have a tremendous impact on small Web 2.0 service development companies, where cash flow is always a problem. Many of these companies used to go to a “sell it cheap” colo, but there are fewer and fewer of these that can be resorted to. If small companies do go to these lower-cost colos, they could be placing their services in jeopardy, as the colo might have only one carrier fibre connection to the facility, or no power back-up capability, and if that single connection goes down…

It’s not so easy to access lower-cost European facilities: There is space available in some mainland European cities, and at rates considerably lower than those seen in London. However, their possible use does raise some significant issues:

  • A connection between the colo centres needs to be paid for; at multi-Gbit/s bandwidths this does not come cheap. It also needs to be backed up by at least a second link for resilience.
  • For real-time applications such as games or synchronous backup, the additional transit delay can prove to be a significant issue.
  • Companies will need local personnel to support the facility, which can be very expensive and represents a long-term commitment in many European countries.

I called this post ‘Colo crisis for the UK Internet and IT industry?’ I hope that the issues outlined do not have a measurable negative impact in the UK, but I’m really not sure that will be the case. Even if there is a rush to build new facilities in 2007, it will take 18 to 24 months for them to come on line. If this trend continues, a lot of application and content service providers will be forced to deliver their services from mainland Europe or the USA, with a consequent set of knock-on effects for the UK.

I hope I’m wrong.

Note: I would like to acknowledge a presentation given by Tim Anker of theColocationexchange at the Data Centres Europe conference in March 2007, which provided the inspiration for this post. Tim has been concerned about these issues and has been bringing them to the attention of companies for the last two years.


Ethernet-over-everything

March 26, 2007

And you thought Ethernet was simple! It seems I am following a little bit of an Ethernet theme at the moment, so I thought that I would have a go at listing all (many?) of the ways Ethernet packets can be moved from one location to another. Personally, I’ve always found this confusing, as there seems to be a plethora of acronyms and standards. I will not cover wireless standards in this post.

Like IP (Internet Protocol, not Intellectual Property!), the characteristics of an Ethernet connection are only as good as the bearer service it is being carried over, and thus most of the standards are concerned with that aspect. Of course, IP is most often carried over Ethernet, so the performance characteristics of the Ethernet data path bleed through to IP as well. Aspects such as service resilience and Quality of Service (QoS) are particularly important.

Here are the ways that I have come across to transport Ethernet.

Native Ethernet

Native Ethernet in its original definition runs over twisted-pair, coaxial cable or fibre (even though Metcalfe called their cables The Ether). A core feature called carrier sense multiple access with collision detection (CSMA/CD) enabled multiple computers to share the same transmission medium. Essentially this works by a node resending a packet when it did not arrive at its destination because it was lost by colliding with a packet sent from another node at the same time. This is one of the principal aspects of native Ethernet that is dropped when Ethernet is used on a wide-area basis, as it is not needed there.

Virtual LANs (VLANs): An additional capability for Ethernet was defined by the IEEE 802.1Q standard to enable multiple Ethernet segments in an enterprise to be bridged or interconnected, sharing the same physical coaxial cable or fibre while keeping each virtual LAN private. VLANs are focused on a single administrative domain where all equipment configurations are planned and managed by a single entity. What is known as Q-in-Q (VLAN stacking) emerged as the de facto technique for preserving customer VLAN settings and providing transparency across a provider network.

IEEE 802.1ad (Provider Bridges) is an amendment to the IEEE 802.1Q-1998 standard that adds the definition of Ethernet frames carrying multiple VLAN tags.
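
To make the tag stacking concrete, here is a minimal sketch that builds the header of a Q-in-Q frame by hand: a provider S-tag (TPID 0x88A8, per 802.1ad) stacked outside the customer C-tag (TPID 0x8100). The MAC addresses and VLAN IDs are made up.

    import struct

    def vlan_tag(tpid, vid, pcp=0):
        """One 4-byte VLAN tag: the TPID followed by the PCP/DEI/VID (TCI) field."""
        tci = (pcp << 13) | (vid & 0x0FFF)
        return struct.pack("!HH", tpid, tci)

    dst = bytes.fromhex("ffffffffffff")            # broadcast destination MAC
    src = bytes.fromhex("020000000001")            # locally administered source MAC
    payload_ethertype = struct.pack("!H", 0x0800)  # IPv4 follows the tags

    # Provider S-tag stacked outside the customer C-tag, which is carried transparently.
    frame_header = (dst + src
                    + vlan_tag(0x88A8, vid=500)    # provider VLAN 500
                    + vlan_tag(0x8100, vid=42)     # customer VLAN 42
                    + payload_ethertype)
    print(frame_header.hex())
    print(len(frame_header), "bytes before the payload")   # 12 + 4 + 4 + 2 = 22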

Ethernet in the First Mile (EFM): In June 2004, the IEEE approved a formal specification developed by its IEEE 802.3ah task force. EFM focuses on standardising a number of aspects that help Ethernet from a network access perspective. In particular, it aims to provide a single global standard enabling complete interoperability of services. The standards activity encompasses EFM over fibre, EFM over copper, EFM over passive optical network, and Ethernet in the First Mile Operation, Administration, and Maintenance. Combined with whatever technology a carrier deploys to carry Ethernet over its core network, EFM enables full end-to-end Ethernet wide area services to be offered.

Over Dense Wave Division Multiplex (DWDM) optical networks

10GbE: The 10Gbit/s Ethernet standard over fibre was published in 2002 (with 10GBASE-T over copper following in 2006) and offers full-duplex capability by dropping CSMA/CD. 10GbE can be delivered over a carrier’s DWDM optical network.

Over SONET / SDH

Ethernet over SONET / SDH (EoS): For those carriers that have deployed SONET / SDH networks to support their traditional voice and TDM data services, EoS is a natural service to offer following a keep-it-simple approach, as it does not involve the tunnelling that would be needed using IP/MPLS as the transmission medium. Ethernet frames are encapsulated into SDH Virtual Containers. This technology is often preferred by customers as it does not involve the transmission of Ethernet via encapsulation over a shared IP or MPLS network, which is often perceived as a performance or security risk by enterprises (I always see this as an illogical concern, as ALL public networks are shared networks at some level).

Link Access Procedure – SDH (LAPS): LAPS, a variant of the original LAP protocol, is an encapsulation scheme for Ethernet over SONET/SDH. LAPS provides a point-to-point connectionless service over SONET/SDH and enables the encapsulation of IP and Ethernet data.

Over IP and MPLS:

Layer 2 Tunnelling Protocol (L2TP): L2TP was originally standardised in 1999, but an updated version, L2TPv3, was published in 2005. L2TP is a Layer 2 data-link protocol that enables data link protocols to be carried on IP networks alongside PPP. This includes Ethernet, Frame Relay and ATM. L2TPv3 is essentially a point-to-point tunnelling protocol that is used to interconnect single-domain enterprise sites.

L2TPv3 can be used to provide a Virtual Private Wire Service (VPWS) and is aimed at native IP networks. As a pseudowire technology, it is often grouped with Any Transport over MPLS (AToM).

Layer 2 MPLS VPN (L2VPN): Customers’ networks are separated from each other on a shared MPLS network using the MPLS Label Distribution Protocol (LDP) to set up point-to-point pseudowire Ethernet links. The picture below shows individual customer sites that are relatively near to each other connected by L2TPv3 or L2VPN tunnelling technology based on MPLS Label Switched Paths.

Virtual Private LAN Service (VPLS): A VPLS is a method of providing a fully meshed, multipoint, wide-area Ethernet service using pseudowire tunnelling technology. VPLS is a Virtual Private Network (VPN) that enables all the LANs on a customer’s premises connected to it to communicate with each other. A new carrier that has invested in an MPLS network rather than an SDH / SONET core network would use VPLS to offer Ethernet VPNs to its customers. The picture below shows a VPLS with an LSP link containing multiple MPLS pseudowire tunnels.
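
One practical consequence of that full mesh (a tiny sketch below, with arbitrary PE counts) is that the number of pseudowires grows quickly with the number of PE routers participating in the VPLS instance:

    def full_mesh_pseudowires(pe_count):
        """A flat VPLS instance needs a pseudowire between every pair of PE routers."""
        return pe_count * (pe_count - 1) // 2

    for n in (4, 10, 20):
        print(n, full_mesh_pseudowires(n))   # 4 -> 6, 10 -> 45, 20 -> 190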

MEF: The MEF defines several Ethernet service types:

Ethernet Private Line (EPL). An EPL service supports a single Ethernet VC (EVC) between two customer sites.

Ethernet Virtual Private Line (EVPL). An EVPL service supports multiple EVCs between two customer sites.

Virtual Private LAN Service (VPLS), or Ethernet LAN (E-LAN), service supports multiple EVCs between multiple customer sites.

These MEF-created service definitions, which are not standards as such (indeed they are independent of standards), enable equipment vendors and service providers to achieve 3rd party certification for their products.

Looking forward:

100GbE: In 2006, the IEEE’s Higher Speed Study Group (HSSG), tasked with exploring what Ethernet’s next speed might be, voted to pursue 100G Ethernet over other options such as 40Gbit/s Ethernet, to be delivered in the 2009/10 time frame. The IEEE will work to standardise 100G Ethernet over distances as far as 6 miles (about 10km) over single-mode fibre optic cabling and 328 feet (100m) over multimode fibre.

PBT or PBB-TE: PBT is a group of enhancements to Ethernet that are being defined by the IEEE’s Provider Backbone Bridging Traffic Engineering (PBB-TE) group. I’ve covered this in Ethernet goes carrier grade with PBT / PBB-TE?

T-MPLS: T-MPLS is a recent derivative of MPLS; I have covered this in PBB-TE / PBT or will it be T-MPLS?

Well, I hope I’ve covered most of the Ethernet wide area transmission standards activities here. If I haven’t, I’ll add others as addenda. At least they are all on one page!


Islands of communication or isolation?

March 23, 2007

One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is now often driven as much by commercial businesses and consortia as by the traditional standards bodies, which often leads to competing solutions and standards slugging it out in the marketplace (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But, as mere individuals in this world of giants, these issues can cause us users real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways that we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world that owns a phone, as long as you know their number. Whereas many, if not most, of the new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What are the causes of these so-called islands of isolation? Here are a few examples.

Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. The Lotus Notes and Outlook email systems were but one example. This is often still the case today when vendors bolt advanced features onto the basic product that are not available to anyone not using that software; presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that employees of the company have full access to their own services, but these are not available to anyone outside of the firewall for use on an inter-company basis. Combine this with the deployment of the vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only offered to their customer base. Free proprietary PC-PC calls come to mind and more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access and only provide what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension could not work with someone who was not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: Users groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development:  In the absence of standards companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. Bet – you can name one or two of those!

One take-away from all this is that in the real world you can’t avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, and these islands will probably remain isolated for the indefinite future in the competitive world we live in.

Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues within a subset of islands, but overall this remains a major conundrum that limits our ability to communicate at any time, any place and anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing in order to succeed, run a mile – no, ten miles! On the other hand, it could be seen as the land of opportunity?


Video history of Ethernet by Bob Metcalfe, Inventor of Ethernet

March 22, 2007

The History of Ethernet

The Evolution of Ethernet to a Carrier Class Technology

The story of Ethernet goes back some 32 years to May 22, 1973. We had early Internet access. We wanted all of our PC’s to be able to access the Internet at high speed. So we came up with the Ethernet…

