Chaos in Bangladesh’s ‘Illegal’ VoIP businesses

April 11, 2007

Take a listen to a report on BBC Radio Four’s PM programme, broadcast on 9th April, which talks about the current chaos in Bangladesh brought about by the enforced closure of ‘illegal’ VoIP businesses. This is one of the impacts of the state of emergency imposed three months ago and has resulted in a complete breakdown of the Bangladeshi phone network.

It seems that VoIP calls account for up to 80% of telephone traffic from abroad into the country, driven by low call rates of between 1 and 2 pence per minute.

The new military-backed government has been waging war on small VoIP businesses, with the “illegality and corruptions of the past being too long tolerated”. Many officials have been arrested, buildings pulled down and businesses closed.

The practical result has thrown the telephone industry into chaos as hundreds of thousands of Bangladeshis living abroad try to call home only to get the engaged tone.

“In many countries VoIP is legal but in Bangladesh it has been long rumoured that high profile politicians have been operating the VoIP businesses and had an interest in keeping them outside of the law and unregulated to avoid taxes on the enormous revenues they generated.”

The report says that the number of conventional phone lines is being doubled in April to 30,000 lines, but with a population of over 140 million people this is far too few!

You can listen to the report here: Chaos in Bangladesh’s Illegal VoIP business (Copyright BBC).

It really is amazing how disruptive a truly disruptive technology can be, but when this happens it usually comes back to bite us!

I talked about the Sim Box issue in Revector, detecting the dark side of VoIP, and the Bangladesh situation shows why incumbent carriers are often hell-bent on stamping out VoIP traffic. In the western world the situation is no different, but governments and carriers do not just bulldoze the businesses – maybe they should in some cases!

Addendum #1: the-crime-of-voice-over-ip-telephony/

iotum’s Talk-Now is now available!

April 4, 2007

In a previous post The magic of ‘presence’, I talked about the concept of presence in relation to telecommunications services and looked at different examples of how it had been implemented in various products.

One of the most interesting companies mentioned was iotum, a Canadian company. iotum had developed what they called a relevance engine, which enabled ability-to-talk and willingness-to-talk information to be provided within a telecom service by attaching it to appropriate equipment such as a Private Branch eXchange (PBX) or a call centre Automatic Call Distribution (ACD) manager.

One of the biggest challenges for any company wanting to translate presence concepts into practical services is how to make them usable, rather than just a fancy concept used to describe a number of peripheral and often unusable features of a service. Alec Saunders, iotum’s founder, has been articulating his ideas about this in his blog Voice 2.0: A Manifesto for the Future. Like all companies that have their genesis in the IT and applications world, Alec believes that “Voice 2.0 is a user-centric view of the world… it’s all about me — my applications, my identity, my availability.”

And rather controversially, if you come from the network or the mobile industry: “Voice 2.0 is all about developers too — the companies that exploit the platform assets of identity, presence, and call control. It’s not about the network anymore.” Oh, by the way, just to declare my partisanship, I certainly go along with this view and often find that the stove-pipe and closed attitudes sometimes seen in mobile operators are one of the biggest hindrances to the growth of data-related applications on mobile phones.

There is always a significant technical and commercial challenge in OEMing platform-based services to service providers and large mobile operators, so the launch of a stand-alone service that is under the complete control of iotum is not a bad way to go. Any business should have full control of its own destiny, and the choice of the relatively open Blackberry platform gives iotum a user base they can clearly focus on to develop their ideas.

iotum launched the beta version of Talk-Now in January. It provides a set of features aimed at helping Blackberry users make better use of the device the world has become addicted to in the last few years. Let’s talk turkey: what does the Talk-Now service do?

According to the web site, as seen in the picture on the left, it provides a simple-in-concept bolt-on service for Blackberry phone users to see and share their availability status with other users.

At the in-use end of the service, Talk-Now interacts with a Blackberry user’s address book by colour coding contact names to show each individual’s availability. On initial release only three colours were used: white, red and green.

Red and green clearly show when a contact is either Not-Available or Available; I’ll talk about white in a minute. Yellow was added later, based on user feedback, to indicate an Interruptible status.

The idea behind Talk-Now is that it helps users reduce the amount of time they waste on non-productive calls and leaving voicemails. You may wonder how this availability guidance is provided by users. A contact with a white background provides the first indication of how this is achieved.

Contacts with a white background are not Talk-Now users, so their availability information is not available (!). Hence one of the key features of the service is an Invite People process to get them to use Talk-Now and see your availability information.

If you wish a non-Talk-Now contact to see your availability, you can select their name from the contact list and send them an “I want to talk with you” email. This email provides a link to an Availability Page as shown below, talks about the benefits of using the service (I assume), and asks the recipient to sign up. This is a secure page that is only available to that contact, and for a short time only.

Once a contact accepts the invite and signs up to the service, you will be able to see their availability – assuming that they set up the service.

So, how do you indicate your availability? This is set up with a small menu as shown on the left. Using this you can set up the following status information.

Busy: set your free/busy status manually from your BlackBerry device

In a meeting: iotum Talk-Now synchronizes with your BlackBerry calendar to know if you are in a meeting.

At night: define which hours you consider to be night time.

Blocked group: you can add contacts to the “blocked” group.

You can also set up VIPs (Very Important Persons) who are individuals who receive priority treatment. This category needs to be used with care. Granting VIP status to a group overrides the unavailability settings you have made. You can also define Workdays. Some groups might be VIPs during work hours, while other groups might get VIP status outside of work. This is designed to help you better manage your personal and business communications.
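
The rules described above can be sketched as a simple decision function. This is purely illustrative: the names and the precedence order (blocked first, then VIP override, then busy/meeting/night) are my assumptions about how such rules might combine, not iotum’s actual logic.

```python
from datetime import time

# Hypothetical sketch of Talk-Now style availability rules.
# Precedence assumed: blocked contacts always see Not-Available (red),
# VIPs override unavailability, otherwise busy/meeting/night apply.

NIGHT_START, NIGHT_END = time(22, 0), time(7, 0)  # user-defined night hours

def availability(contact, owner, now):
    """Return 'green' (Available) or 'red' (Not-Available) for a contact."""
    if contact in owner["blocked"]:
        return "red"
    if contact in owner["vips"]:
        return "green"                      # VIP status overrides unavailability
    if owner["manual_busy"] or owner["in_meeting"]:
        return "red"
    if now >= NIGHT_START or now < NIGHT_END:
        return "red"                        # night-time hours
    return "green"

owner = {"blocked": {"spammer"}, "vips": {"boss"},
         "manual_busy": True, "in_meeting": False}

print(availability("boss", owner, time(23, 30)))      # VIP wins even at night
print(availability("colleague", owner, time(10, 0)))  # manual busy wins
```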

There is also a feature whereby you can be alerted when a contact becomes available by a message being posted on your Blackberry as shown on the right.


Many of the above setting can be set up via a web page, for example:

Setting your working week

Setting contact groups

However, it should be remembered that, like Plaxo and LinkedIn, this web-based functionality does require you to upload – ‘synchronise’ – your Blackberry contact list to the iotum server, and many Blackberry users might object to this. Note that the calendar is also accessed, to determine when you are in meetings and deemed busy.

If you want to hear more, then take a look at the video that was posted after a visit with Alec Saunders and the team by Blackberry Cool last month:

Talk-Now looks to be an interesting and well thought out service. Following traditional Web 2.0 principles, the service is provided for free today with the hope that iotum will be able to charge for additional features at a future date.

I wish them luck in their endeavours and will be watching intensely to see how they progress in coming months.

MPLS-TE and network traffic engineering

April 2, 2007

In my post entitled The rise and maturity of MPLS I said that one of the principal reasons for a carrier to implement MPLS was the need for what is known as traffic engineering in their core IP networks. Before the advent of MPLS this capability was supplied by ATM, with many of the world’s terrestrial Internet backbones being based on this technology. ATM provided a switching capability in Points of Presence (PoPs) that enabled automatic switchover to an alternative inter-city ATM pipe in case of failure. I say ‘inter-city’ because ATM was not generally implemented on a transoceanic basis, as it was deemed expensive and inefficient due to its 17% overhead, commonly known as cell tax (Picture: Aria Networks planning software, from a presentation to the UK Network Operators Forum 2006).

IP engineers were keen to remove this additional ATM layer and replace it with a control capability, which became MPLS. However, MPLS in its original guise did not really ‘cut the mustard’ for use in a traffic-engineered regime, so the standard was enhanced through the release of extensions known as MPLS-TE. This post will look at traffic engineering and its sibling activity, capacity planning, and their relationship to MPLS.

Capacity planning

Capacity planning is an activity undertaken by all carriers on a ‘regular’ basis. Nowadays this would most likely be annual, although in the heady days of heavy growth in the latter half of the 1990s it was unlikely to be undertaken less often than quarterly.

Capacity planning is an important function as it directly drives the purchase of additional network equipment or the purchase or building of additional bandwidth capacity for the physical network. It is undertaken by the planning department and principally consists of the following activities:

Network topology: The starting point of any planning exercise is to profile the existing network to act as a benchmark to build upon. This consists of two things. The first is a complete database of the nodes or PoPs and network link bandwidths, i.e. their maximum capacities. This sounds easier than it is in reality: in many instances carriers do not know the full extent of the equipment deployed, often the result of one too many acquisitions. This discovery of assets can either be based on an on-going manual spreadsheet or database exercise, or software can be used to automatically discover the up-to-date installed network topology. Another way is to export network configurations from network equipment such as routers.

Traffic matrices: What is needed next is detailed link resource utilisation data, or traffic profiles, for each service type. These are often called traffic matrices. Links are the pipes interconnecting a network’s PoPs, and their utilisation is how much of the links’ bandwidth is being used. As IP traffic is very dynamic and varies tremendously according to the time of day, good traffic engineering leads to good operational practice such as never loading a link beyond a certain percentage – say 50%. Every carrier has its own standard, which could quite easily be made higher to save money but could risk poor network performance at peak times. Clearly, engineers and accountants have different perspectives in these discussions! (Raw IP traffic flow. Credit: Cariden.)
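
The link-loading rule just described boils down to a simple headroom check over the traffic matrix. A minimal sketch, with purely illustrative link names and figures:

```python
# Illustrative check of a carrier's link-loading rule: flag any link
# whose peak utilisation exceeds the in-house threshold (here 50%).

MAX_UTILISATION = 0.5   # carrier-specific engineering standard

links = {                       # link -> (capacity Mbit/s, peak load Mbit/s)
    "London-Manchester": (10_000, 6_200),
    "London-Leeds":      (10_000, 3_100),
    "Manchester-Leeds":  (2_500,  1_400),
}

for name, (capacity, peak) in links.items():
    util = peak / capacity
    status = "UPGRADE" if util > MAX_UTILISATION else "ok"
    print(f"{name}: {util:.0%} {status}")
```

Real planning tools work from per-service matrices across every PoP pair, but the decision at each link is essentially this comparison.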

Demand forecast: At this point, capacity planning engineers ask their product marketing and sales brethren for a service sales forecast for the next planning cycle, which could be between one and three years. If you talk to any planning engineer I’m sure you will hear plaintive cries such as “I can’t plan unless I get a forecast”. However, can you think of a worse group of individuals to get this sort of information from than sales people? I would guess that this is one of the biggest challenges planning departments face.

Once topology, current traffic matrices and forecasts for each service (IP transit, VoIP, IP VPNs, IPTV etc.) have been obtained, the task of planning for the next capacity planning period can begin. This results in – or should result in – a clear plan for the company that covers such issues as:

  • What existing link upgrades are required
  • What new links are required
  • What new or expansion to backup links are required
  • What new Capital Expenditure (CAPEX) is required
  • What increase in Operational Expenditure (OPEX) is required
  • What new migration or changeover plans are required
  • Lots of management reports, spreadsheets and graphs
  • Caveats about the on-going unreliability of the growth forecasts received!

Traffic engineering (TE)

While capacity planning is a long-term, forward-looking activity concerned with optimising network growth and performance in the face of growing service demand, traffic engineering is focused on how the network performs in delivering services at a much finer granularity.

Traffic engineering in networks has a history as long as telephones have been around and is closely associated with A.K. Erlang. One of the fundamental metrics in voice Public Switched Telephone Networks (PSTN) was named after him: the Erlang. An Erlang is a measure of the occupancy or utilisation of voice links, regardless of whether traffic is flowing or not. Erlang-based calculations were – and are – used to calculate Quality of Service (QoS) and the optimum utilisation of fixed-bandwidth links, taking into account the amount of traffic at peak times.
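
The best-known of these calculations is the Erlang B formula, which relates offered traffic A (in Erlangs) and the number of circuits N to the probability that a call is blocked. A minimal sketch using the standard recurrence:

```python
# Erlang B blocking probability: the probability that a call is blocked
# when A Erlangs of traffic are offered to N circuits.
# Uses the standard recurrence B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1)),
# which avoids computing large factorials directly.

def erlang_b(offered_erlangs: float, circuits: int) -> float:
    b = 1.0                     # B(A, 0) = 1: zero circuits block everything
    for n in range(1, circuits + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# Example: 10 Erlangs of offered traffic on 15 circuits.
print(f"{erlang_b(10, 15):.4f}")
```

Planners ran this sort of calculation in reverse: given a target blocking probability (the grade of service), find the smallest trunk group that achieves it.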

Traffic engineering is even more important in the highly dynamic world of IP networks, and carriers can experience a considerable number of benefits if traffic engineering is taken seriously by management.

  • Cost optimisation: Providing network links is an expensive pastime by the time you take IP equipment, optical equipment, OSS costs and OPEX into account. The more that a network is fully utilised without degradation, the more money can flow to the bottom line.
  • Congestion management: If a network is badly traffic engineered either through under-planning, under-spending or under-resourcing, the more chance there is for network problems such as outages or congestion to impact a customer’s experience. The telecoms world is stuffed full of examples of where this has happened.
  • Dynamic services and traffic profiles: Traffic profiles and flows can change quite considerably over time when new services with different traffic profiles are launched without involving network planners. In an age when there is considerable management pressure to reduce time-to-market, this can happen more often than many companies would admit to.
  • Efficient routing: In MPLS and the limitations of the Internet I wrote about how one of the strengths of the IP protocol was that a packet could always find a path to its destination if one existed, but that strength created problems when a service required predictable performance. Traffic-engineered networks provide paths for critical services that are deterministic / predictable from a path perspective and from a Quality of Service (QoS) perspective. It would not be an overstatement to say that this is pretty much mandatory in these days of converged Next Generation Networks.
  • Availability, resilience and fast restoration: If a network’s customers see an outage at any time, the consequences can be catastrophic from a churn or brand-image perspective, so high availability is a crucial network metric that needs to be monitored. There is a tremendous difference in perceived reliability between PSTN voice networks and IP networks. For example, tell me the last time your home telephone broke down? It’s not that PSTN networks are more reliable than IP networks – they’re not – it’s just that PSTN networks have been better designed to transparently work around broken equipment or broken links. Subscribers, to use that old telephony term, are blissfully unaware of a network outage. Of course, if a digger cuts through a major fibre and the SDH backbone ring is not actually a ring… well, that’s another story.
  • QoS and new services: Real time services need an ability to separate latency-critical services such as VoIP from non-critical services such as email. Traffic engineering is a critical tool in achieving this.

Multi-protocol Label Switching – Traffic Engineering (MPLS-TE)

The term ‘-TE’ is used to describe other technologies as well, notably in the attempts to make Ethernet carrier grade in quality, discussed in my posts on PBB-TE and T-MPLS, the latter being built on the back of MPLS-TE (Picture credit: OpNet planning software).

As mentioned above, before the advent of MPLS-TE, carriers of IP traffic relied on the underlying transport – networks such as ATM – for traffic engineering. MPLS-TE consists of a set of extensions to MPLS that enable native traffic engineering within an MPLS environment. Of course, this does not remove the need to traffic engineer any layer-1 transport network that MPLS may be carried over. MPLS-TE was covered in the IETF’s RFC 2702.

What does MPLS-TE bring to the TE party?

(1) Explicit or constraint-based end-to-end routing: The picture below shows a small network where traffic flowing from the left could travel via two alternative paths to exit on the right. This was specifically the environment that Interior Gateway Protocol (IGP) routing algorithms such as Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) were designed to operate in, by routing all traffic over the shortest path as shown below (Picture credit: NANOG).

This could inevitably lead to problems, with the north path shown above becoming congested while the south path remains unused, wasting expensive network assets. Before MPLS-TE, standard IGP routing metrics could be ‘adjusted’, ‘manipulated’ or ‘tweaked’ to reduce this possibility; however, doing this could be very complicated and very challenging to manage on a day-to-day basis. Such an approach usually required a network-wide plan. In other words, it is a bit of a horror to manage.

With MPLS-TE, using an extension to the Resource Reservation Protocol (RSVP) signalling protocol known, not surprisingly, as RSVP-TE, explicit paths can be set up to force selected traffic flows through them as shown below.

This deterministic routing helps reduce congestion on particular links, helps load the network more evenly thus reducing the number of ‘orphaned links’, ensures optimal utilisation of the network, helps planners separate latency-dependent services from non-critical services, and helps better manage upgrade costs (Picture credit: NANOG).

These paths are called TE tunnels or label switched paths (LSPs). LSPs are unidirectional so two need to be specified to handle bi-directional traffic between two nodes.

(2) Constraint-based routing: Network planners are now able to undertake what is known as constraint-based routing, where traffic paths can be computed (aka Path Computation) that meet constraints other than the least number of nodes or PoPs, which is what drives OSPF and IS-IS. The constraint could be the links with the least utilisation, the least delay, or the most free bandwidth, or links that use the carrier’s own rather than a partner’s infrastructure.
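
The essence of constraint-based path computation can be illustrated on a toy topology: prune the links that fail the constraint (here, insufficient free bandwidth, as CSPF does), then run an ordinary shortest-path search over what remains. The topology and figures below are invented for illustration.

```python
import heapq

# Constraint-based routing sketch: links that cannot satisfy the
# requested bandwidth are pruned before the shortest-path search,
# so the computed path only uses links meeting the constraint.

def constrained_path(links, src, dst, needed_bw):
    # links: list of (node_a, node_b, cost, free_bandwidth)
    graph = {}
    for a, b, cost, free in links:
        if free >= needed_bw:               # the constraint
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
    # Plain Dijkstra over the pruned graph
    queue, seen = [(0, src, [src])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == dst:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (dist + cost, nxt, path + [nxt]))
    return None                             # no path satisfies the constraint

links = [("A", "B", 1, 100), ("B", "D", 1, 40),   # short north path, little headroom
         ("A", "C", 2, 900), ("C", "D", 2, 900)]  # longer south path, lots of headroom

print(constrained_path(links, "A", "D", 10))  # small flow takes the north path
print(constrained_path(links, "A", "D", 50))  # big flow forced onto the south path
```

Note how the 50 Mbit/s request is steered onto the longer south path because the B-D link cannot carry it, exactly the behaviour plain OSPF/IS-IS cost metrics cannot express.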

(3) Bandwidth reservation: DiffServ-aware MPLS-TE (DS-TE) enables per-class traffic engineering across an MPLS-TE network. Physical interfaces and TE tunnels / LSPs can be told how much bandwidth can be reserved or used. This can be used to dynamically allocate, share and adjust over time the bandwidth given to critical services such as VoIP and to best-effort traffic such as Internet browsing and email.

(4) Fast Re-Route (FRR): MPLS-TE supports local rerouting around a faulty node (node protection) or faulty link (link protection). Planners can define alternative paths to be used when failure occurs. FRR can reroute traffic in tens of milliseconds, minimising downtime. However, although FRR sounds like a good idea, the amount of computing effort required to calculate FRR paths for a complete network is very significant. If a carrier does not have the appropriate path computation tools, using FRR could cause significant problems by rerouting traffic non-optimally to a segment of the network that is already congested rather than one that is under-utilised (Picture: An LSP tunnel and its backup. Credit: Wandl).

There are other additions to MPLS covered by the MPLS-TE extensions, but these are minor compared to the ones described above.

Practical use of MPLS-TE

One would imagine that, with all the benefits that accrue from using MPLS-TE – enhanced service quality, easier new service deployment and reduced risk – carriers would flock to it. However, this is not necessarily the case as, as with all new technologies, there are alternatives such as:

  • Traditional over-provisioning: Traffic engineering management can be a very complicated task if you attempt to analyse all the flows within a large network. One of the traditional ways that many carriers get round this onerous and challenging task is simply to over-provision their networks. If a network is geographically constrained or simply simple, then throwing bandwidth at the network can be seen as a straightforward and unchallenging solution. Network equipment is so much cheaper than it used to be (and smaller carriers can buy equipment from eBay – cough, cough!). Dark fibre or multi-Gbit/s links can be bought or leased relatively cheaply as well. So why bother putting in the effort to traffic engineer a network properly?
  • The underlying network does the TE: Many carriers still use the underlying network to traffic engineer their networks. Although ATM is not around as much as it used to be, SDH still is.
  • Stick with IGP adjustment: Many carriers still stick to the simple IGP metric adjustment discussed earlier as it handles the simple TE activities they require. True, many would moan about how difficult this is to manage, but migrating to an MPLS-TE environment could be seen as a costly exercise and they currently do not have the time, resource or money to undertake the transition.
  • Let’s wait and see: There are so many considerations and competitive options that the easiest decision to make is to do nothing.

Round up

IP traffic engineering is a hot subject and brings forth a considerable variety of views and emotions when discussed. Many carriers have stuck with methods that they have outgrown but are hesitant about making the jump, thinking that something better will come along. Many try to avoid the issue completely by simply over-provisioning and taking an ultra-KISS approach.

However, those carriers that are truly pursuing a converged Next Generation architecture, with all services based on IP and legacy services carried in pseudowire tunnels, cannot avoid undertaking real traffic engineering to the degree undertaken by the old PSTN planning departments. To a great extent this could be seen as the wild child of IP networks growing up to become a mature and responsible adult. The IP industry has a long way to go though: creating standards is difficult enough, but getting people to use them is something else!

Whatever else, simply sitting and waiting is not the solution…

Aria Networks (UK)

the economics of network control (USA)

Making networks perform (USA)
You have one network, We have one plan (USA)

Wandl wide area design laboratory (USA)

Addendum #1: The follow-on article to this post is: Path Computation Element (PCE): IETF’s hidden jewel

Addendum #2: GMPLS and common control

Addendum #3: Aria Networks shows the optimal path

Colo crisis for the UK Internet and IT industry?

March 29, 2007

The UK Internet and IT industries are facing a real crisis that is creeping up on them at a rate of knots (Source: theColocationexchange). In the UK we often believe that we are at the forefront of innovation and the delivery of creative content services such as Web 2.0 and IPTV, but this crisis could force many of these services to be delivered from non-UK infrastructure over the next few years.

So, what are we talking about here? It’s the availability of colocation (colo) services and what you need to pay to use them. Colocation is an area where the UK has excelled and has led Europe for a decade, but this could be set to change over the next twelve months.

It’s no secret to anyone that hosts an Internet service that prices have gone through the roof for small companies in the last twelve months, forcing many of the smaller hosters to just shut up shop. The knock-on effects of this will have a tremendous impact on the UK Internet and IT industries, as it also affects large content providers, content distribution companies such as Akamai, telecom companies and core Internet exchange facilities such as LINX. In other words, pretty much every company involved in delivering Internet services and applications.

We should be worried.

Estimated London rack pricing to 2007 (Source: theColocationexchange)

The core problem is that available colocation space is not just in short supply in London, it is disappearing at an alarming and accelerating rate as shown in the chart below (it is even worse in Dublin). It could easily run out for anyone who does not have a deep pocket.

Estimated space availability in London area (Source: theColocationexchange)

What is causing this crisis?

Here are some of the reasons.

London’s ever-increasing power as a world financial hub: According to the London web site: “London is the banking centre of the world and Europe’s main business centre. More than 100 of Europe’s 500 largest companies have their headquarters in London, and a quarter of the world’s largest financial companies have their European headquarters in London too. The London foreign exchange market is the largest in the world, with an average daily turnover of $504 billion, more than New York and Tokyo combined.”

This has been a tremendous success for the UK and has driven a phenomenal expansion in financial companies’ needs for data centre hosting, and they have turned to 3rd party colo providers to meet these needs. In particular, the need for disaster recovery has driven them not only to expand their own in-house capabilities but also to place infrastructure in 3rd party facilities. Colo companies have welcomed these prestigious companies with open arms in the face of the telecoms industry meltdown post 2001.

Sarbanes-Oxley compliance: The necessity for any company that operates in the USA to comply with the onerous Sarbanes-Oxley regulations has had a tremendous impact on the need to manage and audit the capture, storage, access, and sharing of company data. In practice, more analysis and more disk storage are needed leading to more colo space requirements.

No new colo build for the last five years: As in the telecommunications world, life was very difficult for colo operators in the five years following the new millennium. Massive investment in the latter half of the 1990s was followed by pretty much zero expansion of the industry, which remained effectively in stasis. One exception to this is IX Europe, who are expanding their facilities around Heathrow. However, builds such as this will not have any great impact on the market overall, even though they will be highly profitable for the companies expanding.

However, in the last 24 months both the telecoms and the colo industries have seen a boom in demand and a return to the buoyant market last seen in the late 1990s (Picture credit: theColocationexchange).

Consolidation: In London particularly, there has been a strong trend towards consolidation and roll-up of colo facilities. A prime example is Telecity (backed by private equity finance from 3i Group, Schroders and Prudential), who have bought Redbus and Globix in the last twenty-four months. These acquisitions included a number of smaller colo operators that focused on supplying smaller customers. The now larger operators have concentrated on winning the lucrative business of corporate data centre outsourcing, which is seen as highly profitable with long contract periods.

Facility absorption: In a similar way that many telecommunications companies were sold at very low prices post 2000, the same trend happened in the colo industry. In particular, many of the smaller colos west of London were bought at knock-down valuations by carriers, large third-party systems integrators and financial houses. This effectively took that colo space permanently off the market.

Content services: There has been tremendous infrastructure growth in the last eighteen months by the well-known media and content organisations. This includes all the well-known names such as Google, Yahoo and Microsoft. It also includes companies delivering newer services such as IPTV and content distribution companies such as Akamai. It could be said, with justification, that this growth is only just beginning, and these companies are competing directly with the financial community, enterprises and carriers for what little colo space is left.

Carrier equipment rooms: Most carriers have their own in-house colo facilities to house their own equipment or offer colo services to their customers. Few carriers have invested in expanding these facilities in the last few years, so most are now 100% full, forcing carriers to go to 3rd party colos for expansion.

Instant use: When enterprises buy space today they immediately use it rather than letting it lie fallow.

How has the Colo industry reacted to this ‘Colo 2.0’ spurt of growth?

With demand going through the roof and with a limited amount of space available in London, it is clearly a seller’s market. Users of colo facilities have seen rack prices increase at an alarming rate. For example: Colocation Price Hikes at Redbus.

However, the rack price graph above does not tell the whole story, as charges for power, which used to be a small additional item or even thrown in for free, have risen by a factor of three or even four in the last twelve months.

Colos used to sell colo space solely on the basis of rack footprints. However, the new currency they use is Amperes, not square feet measured in rack footprints. This is an interesting aspect that is not commonly understood by individuals who have not had to buy space in the last twelve months.

This is because colo facilities are not only capped in the amount of space they have to place racks, they also have caps on the amount of electricity the site can take from the local power company. Also, as a significant percentage of this input power is turned into heat by the hosted equipment, colo facilities have needed to make significant investments in cooling to keep the equipment operating within its temperature specifications. They also need to invest in appropriate back-up generators and batteries to power the site in case of an external power failure.

Colo contracts are now principally based on the amount of current the equipment consumes, not its footprint. If the equipment in a rack takes up only a small space but consumes, say, 8 to 10 Amps, then the rest of the rack has to remain empty unless you are willing to pay for an additional full rack’s worth of power.

If a rack owner sub-lets shelf space in a rack to a number of customers, each one has to be monitored with an individual ammeter placed on its shelf.

One colo explains this to their customers in a rather clumsy way:

“Price Rises Explained: Why has there been another price change?

By providing additional power to a specific rack/area of the data floor, we are effectively diverting power away from other areas of the facility, thus making that area unusable. In effect, every 8amps of additional power is an additional rack footprint.

The price increase reflects the real cost to the business of the additional power. Even with the increase, the cost per additional 8amps of power is still substantially less, almost half the cost of the standard price for a rack foot print including up to 8amp consumption.”
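The pricing scheme quoted above is easy to model: each rack footprint includes up to 8 Amps, and every further 8 Amp block is billed at roughly half a footprint's price. Here is a toy sketch of that arithmetic; the prices are hypothetical figures of my own, not the colo's actual rates.

```python
import math

def colo_monthly_cost(amps_needed, rack_price=600.0, extra_8a_price=300.0,
                      amps_per_footprint=8):
    """Illustrative only: one rack footprint includes the first 8 A;
    each additional 8 A block costs about half a footprint's price,
    as in the price explanation quoted above. Prices are hypothetical."""
    blocks = math.ceil(amps_needed / amps_per_footprint)  # 8 A blocks required
    extra_blocks = max(blocks - 1, 0)                     # beyond the included 8 A
    return rack_price + extra_blocks * extra_8a_price
```

So a rack drawing 16 Amps costs a footprint plus one extra power block, even if the equipment physically occupies half the rack.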

Another point to bear in mind here is the current that individual servers consume. With the sophisticated power control embedded in today's servers – just like your home PC – there is a tremendous difference between the current a server consumes when idle and at full load. The amount of equipment placed in a rack is limited by the possible full-load current consumption, even if average consumption is less. In the case of an 8 Amp rack limit, the colo facility would also install a hard trip that turns the rack off if the current reaches, say, 10 Amps.

If the equipment consists of standard servers or telecom switches this can be accommodated relatively easily, but if a company offers services such as interactive games or IPTV and fills a rack with blade (card) servers, the rack can quite easily consume 15 to 20kW of power, or 60 Amps! I'll leave it to you to undertake the commercial negotiations with your colo, but take a big chequebook!
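As a sanity check on those numbers, power converts to current via I = P / V. A quick sketch, assuming a 230V single-phase supply and unity power factor (both assumptions of mine; three-phase feeds change the arithmetic):

```python
def amps_from_kw(kw, volts=230.0):
    # P = V * I, so I = P / V (single-phase, unity power factor assumed)
    return kw * 1000.0 / volts

def max_servers_per_rack(rack_limit_amps, server_full_load_amps):
    # racks are budgeted on worst-case (full-load) draw, not the average
    return int(rack_limit_amps // server_full_load_amps)
```

On these assumptions a 15kW blade rack draws around 65 Amps, which is in the same ballpark as the 60 Amp figure above, and an 8 Amp rack limit holds only a handful of full-load servers.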

What could be the consequences of this looming crisis?

Empty floors like the one seen in the picture are long gone in London colo facilities.

Virtual hosters caught in a trap: Clearly, if a company does not own its own colo facilities but offers colo-based services to its customers, it could prove very difficult and expensive to guarantee access to sufficient space – sorry, Amperage – to meet its customers' growth needs in an exploding market. As in the semiconductor market, where fabless companies are always the first hit in boom times, companies that rely on 3rd-party colos could face significant challenges in the coming months.

No low-cost facilities will hit small hosting companies: The issues raised in this post are significant even for multinational companies, but for small hosting companies they are killers. Many small hosting companies supplying SME customers have already put the shutters up, as it has proved not cost effective to pass these additional costs on to their customers.

Small Web 2.0 service companies and start-ups: The reduced availability of low-cost colo hosting could have a tremendous impact on small Web 2.0 service development companies, where cash flow is always a problem. Many of these companies used to go to a "sell it cheap" colo, but there are fewer and fewer of those to turn to. Those that do use these lower-cost colos could be placing their services in jeopardy, as the colo might have only one carrier fibre connection to the facility – and what happens if that goes down, or there are no power back-up capabilities?

It's not so easy to access lower-cost European facilities: There is space available in some mainland European cities, at rates considerably lower than those seen in London. However, their possible use raises some significant issues:

  • A connection needs to be paid for between the colo centres. At multi-Gbit/s bandwidths this does not come cheap, and it needs to be backed up by at least a second link for resilience.
  • For real-time applications – games or synchronous backup – the additional transit delay can prove a significant issue.
  • Companies will need local personnel to support their facility; this can be very expensive and represents a long-term commitment in many European countries.

I called this post A Crisis for the UK Internet and IT industry? I hope the issues outlined do not have a measurable negative impact in the UK, but I'm really not sure that will be the case. Even if there is a rush to build new facilities in 2007, it will take 18 to 24 months for them to come on line. If this trend continues, a lot of application and content service providers will be forced to deliver their services from mainland Europe or the USA, with a consequent set of knock-on effects for the UK.

I hope I’m wrong.

Note: I would like to acknowledge a presentation given by Tim Anker of theColocationexchange at the Data Centres Europe conference in March 2007, which provided the inspiration for this post. Tim has been concerned about these issues and has been bringing them to the attention of companies for the last two years.


March 26, 2007

And you thought Ethernet was simple! It seems I am following a bit of an Ethernet theme at the moment, so I thought I would have a go at listing all (many?) of the ways Ethernet packets can be moved from one location to another. Personally, I've always found this confusing, as there seems to be a plethora of acronyms and standards. I will not cover wireless standards in this post.

Like IP (Internet Protocol, not Intellectual Property!), the characteristics of an Ethernet connection are only as good as the bearer service it is carried over, and thus most of the standards are concerned with that aspect. Of course, IP is most often carried over Ethernet, so the performance characteristics of the Ethernet data path bleed through to IP as well. Aspects such as service resilience and Quality of Service (QoS) are particularly important.

Here are the ways that I have come across to transport Ethernet.

Native Ethernet

Native Ethernet in its original definition runs over twisted-pair, coaxial cable or fibre (even though Metcalfe called their cables "The Ether"). A core feature called carrier sense multiple access with collision detection (CSMA/CD) enabled multiple computers to share the same transmission medium. Essentially, a node resends a packet when it does not arrive at its destination because it collided with a packet sent from another node at the same time. This is one of the principal aspects of native Ethernet that is dropped when it is used on a wide-area basis, as it is not needed there.

Virtual LANs (VLANs): An additional capability, defined by the IEEE 802.1Q standard, enables multiple Ethernet segments in an enterprise to be bridged or interconnected over the same physical coaxial cable or fibre while keeping each segment private. VLANs are focused on a single administrative domain, where all equipment configurations are planned and managed by a single entity. What is known as Q-in-Q (VLAN stacking) emerged as the de facto technique for preserving customer VLAN settings and providing transparency across a provider network.

IEEE 802.1ad (Provider Bridges) is an amendment to the IEEE 802.1Q-1998 standard that adds the definition of Ethernet frames with multiple VLAN tags.
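The tag stacking described above is easy to picture at the byte level: each tag is a 16-bit TPID followed by a 16-bit TCI (3-bit priority, 1-bit DEI, 12-bit VLAN ID), and Q-in-Q simply pushes a provider tag in front of the customer's. A small sketch (illustrative only; the VLAN IDs are arbitrary):

```python
import struct

def vlan_tag(vid, tpid=0x8100, pcp=0, dei=0):
    """Build a 4-byte 802.1Q/802.1ad tag: a 16-bit TPID followed by the
    TCI (3-bit priority code point, 1-bit DEI, 12-bit VLAN ID)."""
    tci = (pcp << 13) | (dei << 12) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

# Q-in-Q: the provider pushes an outer S-tag (TPID 0x88A8 per 802.1ad)
# ahead of the customer's C-tag, so the customer's VLAN ID survives
# transit across the provider network untouched.
c_tag = vlan_tag(vid=42)                  # customer tag (arbitrary VID)
s_tag = vlan_tag(vid=1001, tpid=0x88A8)   # service/provider tag
stacked = s_tag + c_tag                   # 8 bytes of stacked tags
```

The provider edge strips its S-tag on egress, leaving the original C-tag intact, which is the "transparency" Q-in-Q provides.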

Ethernet in the First Mile (EFM): In June 2004, the IEEE approved a formal specification developed by its IEEE 802.3ah task force. EFM focuses on standardising a number of aspects that help Ethernet from a network-access perspective. In particular, it aims to provide a single global standard enabling complete interoperability of services. The standards activity encompasses EFM over fibre, EFM over copper, EFM over passive optical network, and Ethernet in the First Mile Operation, Administration and Maintenance. Combined with whatever technology a carrier deploys to carry Ethernet over its core network, EFM enables full end-to-end Ethernet wide-area services to be offered.

Over Dense Wave Division Multiplex (DWDM) optical networks

10GbE: The 10Gbit/s Ethernet standard was published in 2006 and offers full-duplex capability by dropping CSMA/CD. 10GbE can be delivered over a carrier's DWDM optical network.


Ethernet over SONET / SDH (EoS): For those carriers that have deployed SONET / SDH networks to support their traditional voice and TDM data services, EoS is a natural service to offer following a keep-it-simple approach, as it does not involve the tunnelling that would be needed using IP/MPLS as the transmission medium. Ethernet frames are encapsulated into SDH Virtual Containers. This technology is often preferred by customers because it does not involve transmitting Ethernet via encapsulation over a shared IP or MPLS network, which enterprises often perceive as a performance or security risk (I always see this as an illogical concern, as ALL public networks are shared at every level).

Link Access Procedure – SDH (LAPS): LAPS, a variant of the original LAP protocol, is an encapsulation scheme for Ethernet over SONET/SDH. LAPS provides a point-to-point connectionless service over SONET/SDH and enables the encapsulation of IP and Ethernet data.

Over IP and MPLS:

Layer 2 Tunnelling Protocol (L2TP): L2TP was originally standardised in 1999, and an updated version, L2TPv3, was published in 2005. L2TP is a layer-2 data-link protocol that enables data-link protocols – including Ethernet, frame relay and ATM – to be carried over IP networks alongside PPP. L2TPv3 is essentially a point-to-point tunnelling protocol that is used to interconnect single-domain enterprise sites.

L2TPv3 is also known as a Virtual Private Wire service (VPWS) and is aimed at native IP networks. As it is a pseudowire technology it is grouped with Any Transport over MPLS (AToM).

Layer 2 MPLS VPN (L2VPN): Customers' networks are separated from each other on a shared MPLS network using the MPLS Label Distribution Protocol (LDP) to set up point-to-point Pseudo Wire Ethernet links. The picture below shows individual customer sites that are relatively near to each other connected by L2TPv3 or L2VPN tunnelling technology based on MPLS Label Switched Paths.

Virtual Private LAN Service (VPLS): A VPLS is a method of providing a fully meshed multipoint wide-area Ethernet service using Pseudo Wire tunnelling technology. VPLS is a Virtual Private Network (VPN) that enables all the customer LANs connected to it to communicate with each other. A new carrier that has invested in an MPLS network rather than an SDH / SONET core would use VPLS to offer Ethernet VPNs to its customers. The picture below shows a VPLS with an LSP link containing multiple MPLS Pseudo Wire tunnels.

MEF: The MEF defines several types of Carrier Ethernet service:

Ethernet Private Line (EPL). An EPL service supports a single Ethernet VC (EVC) between two customer sites.

Ethernet Virtual Private Line (EVPL). An EVPL service supports multiple EVCs between two customer sites.

Virtual Private LAN Service (VPLS), or Ethernet LAN (E-LAN) service, supports multiple EVCs between multiple customer sites.

These MEF-created service definitions, which are not standards as such (indeed, they are independent of standards), enable equipment vendors and service providers to achieve third-party certification for their products.

Looking forward:

100GbE: In 2006, the IEEE's Higher Speed Study Group (HSSG), tasked with exploring what Ethernet's next speed might be, voted to pursue 100Gbit/s Ethernet over other options such as 40Gbit/s Ethernet, to be delivered in the 2009/10 time frame. The IEEE will work to standardise 100G Ethernet over distances of up to 6 miles over single-mode fibre and 328 feet over multimode fibre.

PBT or PBB-TE: PBT is a group of enhancements to Ethernet that are defined in the IEEE’s Provider Backbone Bridging Traffic Engineering (PBBTE) group. I’ve covered this in Ethernet goes carrier grade with PBT / PBB-TE?

T-MPLS: T-MPLS is a recent derivative of MPLS – I have covered this in PBB-TE / PBT or will it be T-MPLS?

Well, I hope I’ve covered most of the Ethernet wide-area transmission standards activities here. If I haven’t, I’ll add the others as addenda. At least they are all on one page!

Islands of communication or isolation?

March 23, 2007

One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is now often driven as much by commercial businesses and industry consortiums, which often leads to competing solutions and standards slugging it out in the marketplace (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But, as mere individuals in this world of giants, these issues can cause us users real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world who owns a phone, as long as you know their number. Whereas many, if not most, of the new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What causes these so-called islands of isolation? Here are a few examples.

Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. The Lotus Notes and Outlook email systems were but one example. This is often still the case today, when vendors bolt advanced features onto the basic product that are not available to anyone not using that software – presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that employees of the company have full access to their own services, but these are not available to anyone outside the firewall for use on an inter-company basis. Combine this with the deployment of vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only offered to their customer base. Free proprietary PC-PC calls come to mind and more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access and only provide what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension cannot work with someone who is not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: Users groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development: In the absence of standards, companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. I bet you can name one or two of those!

One takeaway from all this is that in the real world you can’t avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, and these islands will probably remain with us for the indefinite future in the competitive world we live in.

Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues within subsets of islands, but overall this remains a major conundrum that limits our ability to communicate any time, any place, anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing in order to succeed, run a mile – no, ten miles! On the other hand, it could be seen as a land of opportunity?

PBB-TE (PBBTE) / PBT or will it be T-MPLS (MPLS-TP)?

March 22, 2007

Welcome to more acronym hell. In Ethernet goes carrier grade with PBT? I looked at the history of Ethernet and its increasing use in wide-area networks. A key aspect of that was for standardisation bodies to provide the additional capabilities to make Ethernet ‘carrier grade’ by creating a connection-oriented Ethernet with improved scalability, reliability and simplified management. (Picture credit: Alcatel) As is the wont of the network industry, not only are commercial network equipment manufacturers extremely competitive (through necessity), but the standards bodies are as well. So we have not only the IEEE’s Provider Backbone Bridging Traffic Engineering (PBBTE) but also the ITU’s Transport-MPLS (T-MPLS) activities competing in a similar space. An ITU technical overview presentation can be found here.

T-MPLS is a recent derivative of MPLS and is being defined in cooperation with the IETF. One way of looking at this is through the now confused term ‘layers’. IP clearly operates at layer 3, while MPLS itself has been said to operate at ‘layer 2.5’ as it does not operate at the layer-2 transport level. T-MPLS, by contrast, has been specifically designed to operate at the layer-2 transport level, an area that the ITU focuses on.

I guess the simplest way of looking at this is that T-MPLS is a stripped-down MPLS standard, with the components concerned with MPLS’ support of connectionless IP services removed as irrelevant. T-MPLS is also based on the extensive standardisation work already undertaken and implemented in SDH / SONET, thus representing a marriage of MPLS and SDH. The main components of T-MPLS were approved as recently as November 2006.

The motivation behind T-MPLS is to provide a compatible transport network that can appropriately support the needs of a fully converged NGN IP network while at the same time supporting the ongoing technology enhancements taking place in the optical layer. Because MPLS has gained such popularity in the last few years, it seems only natural to enhance something that is accepted, understood and, more importantly, now deployed by most carriers.

So what is T-MPLS?

T-MPLS operates at the layer-2 data plane level i.e. underneath MPLS or IP/MPLS. It borrows many of the characteristics and capabilities of IETF’s MPLS but focuses on the additional aspects that address the need for any transport layer to provide what is known as high availability i.e. greater than 99.999%. Some of the additions are around the following areas:

  • Clear management and control of bandwidth allocation using MPLS’s Label Switched Paths (LSPs)
  • Improved control of the transport layer’s operational state through SDH-like OA&M (operations, administration and maintenance)
  • Improved and new network survivability mechanisms, such as the protection and restoration seen in SDH

Another key aspect, as seen in PBB-TE, is the complete separation of the control and data planes, creating full flexibility for the network management and signalling that takes place in the control plane. This signalling is known as generalised multi-protocol label switching (GMPLS) (also known as Automatically Switched Transport Network [ASTN]) and provides the same capabilities as the tools used today to manage optical networks.

T-MPLS has been designed to run over an optical transport hierarchy (OTH) or an SDH / SONET network. I assume the OTH terminology has been adopted retrospectively to make it compatible with SDH in the same way Plesiochronous Digital Hierarchy (PDH) was invented to cover pre-SDH technology 15 years ago. The reason for SDH / SONET support is clear as these transport technologies are not about to go away after the significant investments made by carriers over the last 15 years. We should also not forget that the mobile world is still mainly a time division multiplexed (TDM) world and SDH / SONET provides a bridge for those carriers in both market areas or interconnecting with mobile operators. (Picture credit: Alcatel)

Although T-MPLS has been defined as a generic transport standard, its early focus is clearly centred on Ethernet bringing it into potential competition with PBT / PBB-TE. It will be most interesting to see how this competition pans out.

What are differences between T-MPLS and MPLS?

Although T-MPLS is a subset of MPLS, there are several enhancements. The principal one is T-MPLS’ ability to support bi-directional LSPs. MPLS LSPs are unidirectional, and both the forward and backward paths between nodes need to be explicitly defined; conventional transport paths are bi-directional.

The following MPLS features have been dropped in T-MPLS:

  • Equal Cost Multiple Path (ECMP): ECMP lets traffic follow two paths that have the same ‘cost’; it is not needed in a connection-oriented optical world. It is a problem in MPLS as well, as it introduces an element of non-predictability that affects traffic engineering activities.
  • LSP merging option: This is a real problem in MPLS anyway, as traffic from multiple LSPs can be merged into a single one, losing source information in the process. Point-to-Multipoint (P2MP) paths are particularly badly affected.
  • Penultimate Hop Popping (PHP): Labels are removed one node before the egress node to reduce the processing power required on the egress node. A legacy issue caused by underpowered routers that is not of concern today.
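The non-predictability of ECMP mentioned above is easy to illustrate: routers typically hash some identifier of a flow to pick among the equal-cost paths, so which path a given flow takes depends on the hash function rather than on anything the network planner controls. A toy Python selector (a sketch of the general idea, not any vendor's actual algorithm):

```python
import hashlib

def ecmp_path(flow_id, paths):
    """Toy ECMP selector: hash the flow identifier and use it to pick
    one of the equal-cost paths. A given flow always lands on the same
    path, but WHICH path is an artefact of the hash - the unpredictability
    that complicates traffic engineering and that T-MPLS removes by
    dropping ECMP altogether."""
    h = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
    return paths[h % len(paths)]
```

A connection-oriented transport network avoids this entirely: the operator pins each path explicitly, so there is nothing to hash.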

There are many other issues that need to be resolved before T-MPLS can become a robust standard set that is ready for wide scale deployment:

  • Interoperability: Interoperability between the MPLS and T-MPLS control planes. There are lots of issues in this space and most activities are at an early stage.
  • Application interface: One of the main reasons ATM failed was that the majority of applications needed to be adapted to utilise ATM and it just did not happen. Although the problem with T-MPLS is limited to management tool APIs and interfaces, there are a lot of software companies that will need to undertake a lot of work to support T-MPLS. This will be quite a challenge!

T-MPLS’ vision is similar to that of PBB-TE and encompasses high scalability, reduced OPEX costs, handle any packet service, strong security, high availability, high QoS, simple management, and high resiliency.

The drive to T-MPLS has been driven not only by the need to upgrade optical network management but also by the realisation that traditional MPLS-based networks have inherited IP’s characteristic of being expensive to manage from an OPEX perspective and very difficult to manage on a large scale.

This has made most carriers rather jittery. On one hand, they need to follow the industry gestalt of everything-over-IP, based on the assumption that it will all be cheaper one day as well as enabling them to provide the multiplicity of services wanted by their customers. On the other hand, the principal technology being used to deliver this vision, MPLS, is turning out to be more expensive to manage than the legacy networks it is replacing. Quite a conundrum, I think, and one of the principal factors driving the interest in KISS Ethernet services.

The diagram below shows the ages of transport ‘culminating’ in T-MPLS!

Transport Ages (Picture credit: TPACK)

A good overview of T-MPLS can be read courtesy of TPACK.
A side-by-side comparison of PBB-TE and T-MPLS from Meriton
Addendum: One of the principal industry groups promoting and supporting carrier-grade Ethernet is the Metro Ethernet Forum (MEF), which introduced its official certification programme in 2006. Certification is currently only available to MEF members – both equipment manufacturers and carriers – to certify that their products comply with the MEF’s Carrier Ethernet technical specifications. There are two levels of certification:

MEF 9 is a service-oriented test specification that tests conformance of Ethernet services at the UNI interconnect, where the Subscriber and Service Provider networks meet. This represents a good safeguard for customers that the Ethernet service they are about to buy will work! Presentation or High bandwidth stream overview

MEF 14 is a new level of certification that addresses hard QoS, a very important aspect of service delivery not covered in MEF 9. MEF 14 provides hard QoS backed by Service Level Specifications for Carrier Ethernet services: hard QoS guarantees on Carrier Ethernet business services for corporate customers, and guarantees for triple-play data/voice/video services for carriers. Presentation.

Addendum #1: Ethernet-over-everything – what’s everything?

Addendum #2: Enabling PBB-TE – MPLS seamless services