sip, Sip, SIP – Gulp!

May 22, 2007

Session Initiation Protocol, or ‘SIP’ as it is known, has become a major signalling protocol in the IP world as it lies at the heart of Voice-over-IP (VoIP). It’s a term you can hardly miss as it is supported by every vendor of phones on the planet (Picture credit: Avaya: An Avaya SIP phone).

Many open software groups have taken SIP to the heart of their initiatives; one example is the IP Multimedia Subsystem (IMS), which I recently touched upon in IP Multimedia Subsystem or bust!

SIP is a real-time, application-layer IP protocol that sits alongside HTTP, FTP, RTP and other well known protocols used to move data through the Internet. However, it is an extremely important one because it enables SIP devices to discover, negotiate, connect and establish communication sessions with other SIP-enabled devices.

SIP was co-authored in 1996 by Jonathan Rosenberg, who is now a Cisco Fellow, Henning Schulzrinne, who is Professor and Chair in the Dept. of Computer Science at Columbia University, and Mark Handley, who is Professor of Networked Systems at UCL. Responsibility for SIP passed to the IETF SIP Working Group, which still maintains the RFC 3261 standard. SIP was originally used on the US experimental multicast network commonly known as the Mbone. This makes SIP an IT/IP standard rather than one developed by the communications industry.

Prior to SIP, voice signalling protocols such as SS7 (C7 in the UK) were essentially proprietary, aimed at use by the big telecommunications companies on their large Public Switched Telephone Network (PSTN) voice networks. With the advent of the Internet and the ‘invention’ of Voice over IP, it soon became clear that a new signalling protocol was required – one that was peer-to-peer, scalable, open, extensible, lightweight and simple in operation – that could be used by a whole new generation of real-time communications devices and services running over the Internet.

SIP itself is based on earlier IETF / Internet standards, principally the Hypertext Transfer Protocol (HTTP), the core protocol behind the World Wide Web.

Key features of SIP

The SIP signalling standard has many key features:

Communications device identification: SIP supports a concept known as the Address of Record (AOR), which represents a user’s unique address in the world of SIP communications. An example of an AOR is sip:xxx@yyy.com. To enable a user to have multiple communications devices or services, SIP uses a mechanism called a Uniform Resource Identifier (URI). A URI is like the Uniform Resource Locator (URL) used to identify servers on the World Wide Web. URIs can be used to specify the destination device of a real-time session e.g.

  • IM: sip:xxx@yyy.com (Windows Messenger uses SIP)
  • Phone: sip:1234 1234 1234@yyy.com;user=phone
  • FAX: sip:1234 1234 1235@yyy.com;user=fax

A SIP URI can use both traditional PSTN numbering schemes AND alphabetic schemes as used on the Internet.
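
To make the addressing scheme concrete, here is a minimal sketch in Python that builds and picks apart SIP URIs of the form shown above. The helper names and the example addresses are illustrative only and are not taken from any SIP library.

    # Minimal sketch: building and parsing simple SIP URIs (illustrative only).
    def build_sip_uri(user, domain, **params):
        """Return a SIP URI such as sip:xxx@yyy.com;user=phone."""
        uri = f"sip:{user}@{domain}"
        for key, value in params.items():
            uri += f";{key}={value}"
        return uri

    def parse_sip_uri(uri):
        """Split a simple SIP URI into user, domain and parameters."""
        assert uri.startswith("sip:")
        body, *param_parts = uri[4:].split(";")
        user, domain = body.split("@", 1)
        params = dict(part.split("=", 1) for part in param_parts)
        return user, domain, params

    print(build_sip_uri("xxx", "yyy.com"))                      # sip:xxx@yyy.com
    print(parse_sip_uri("sip:12341234123@yyy.com;user=phone"))  # ('12341234123', 'yyy.com', {'user': 'phone'})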

Focussed function: SIP only manages the set-up and tear-down of real-time communication sessions; it does not manage the actual transport of the media itself. Other protocols, such as RTP, undertake that task.

Presence support: SIP is used in a variety of applications but has found a strong home in applications such as VoIP and Instant Messaging (IM). What makes SIP interesting is that it is not only capable of setting up and tearing down real-time communications sessions but also supports and tracks a user’s availability through the Presence capability. (Jabber, the other well-known open presence standard, is built on its own XMPP protocol rather than SIP.) I wrote about presence in – The magic of ‘presence’.

Presence is supported through a key SIP extension: SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) [a really contrived acronym!]. This allows a user to state their status, as seen in most of the common IM systems. AOL Instant Messenger is shown in the picture on the left.

SIMPLE means that the concept of Presence can be used transparently on other communications devices such as mobile phones, SIP phones, email clients and PBX systems.

User preference: SIP user preference functionality enables a user to control how a call is handled in accordance with their preferences. For example:

  • Time of day: A user can take all calls during office hours but direct them to a voice mail box in the evenings.
  • Buddy lists: Give priority to certain individuals according to a status associated with each contact in an address book.
  • Multi-device management: Determine which device / service is used to respond to a call from particular individuals.

PSTN mapping: SIP can manage the translation or mapping of conventional PSTN numbers to SIP URIs and vice versa (a sketch of this mapping follows the quote below). This capability allows SIP sessions to transparently inter-work with the PSTN. Initiatives such as ENUM provide the appropriate mapping and database capabilities. To quote ENUM’s home page:

“ENUM unifies traditional telephony and next-generation IP networks, and provides a critical framework for mapping and processing diverse network addresses. It transforms the telephone number—the most basic and commonly-used communications address—into a universal identifier that can be used across many different devices and applications (voice, fax, mobile, email, text messaging, location-based services and the Internet).”
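
As a rough illustration of the ENUM idea (defined in RFC 3761), the sketch below converts an E.164 telephone number into the reversed-digit domain name that would be looked up in DNS for a record pointing at a SIP URI. The number used is a made-up example from the UK fictional range.

    # Rough sketch of the ENUM (RFC 3761) number-to-domain mapping.
    def e164_to_enum_domain(number):
        """+44 20 7946 0123 -> 3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa"""
        digits = [c for c in number if c.isdigit()]
        return ".".join(reversed(digits)) + ".e164.arpa"

    print(e164_to_enum_domain("+44 20 7946 0123"))
    # A DNS NAPTR query on that domain would (if provisioned) return a record
    # mapping the telephone number to something like sip:user@example.com.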

SIP trunking: SIP trunks enable enterprises to carry inter-site calls over a pure IP network. This could use an IP-VPN over an MPLS-based network with a guaranteed Quality of Service. Using SIP trunks could lead to significant cost savings when compared with traditional E1 or T1 leased lines.

Inter-island communications: In a recent post, Islands of communication or isolation? I wrote about the challenges of communication between islands of standards or users. The adoption of SIP-based services could enable a degree of integration with other companies to extend the reach of what, to date, have been internal services.

Of course, the partner companies need to have adopted SIP as well and have appropriate security measures in place. This is where the challenge would lie in achieving this level of open communications! (Picture credit: Zultys: a Wi-Fi SIP phone)

SIP servers

SIP servers provide the centralised capability that manages the establishment of communications sessions by users. Although there are many types of server, they are essentially only software processes and could all run on a single processor or device. There are several types of SIP server:

Registrar Server: The registrar server authenticates and registers users as soon as they come on-line. It stores identities and the list of devices in use by each user.

Location Server: The location server keeps track of users’ locations as they roam and provides this data to other SIP servers as required.

Redirect Server: When users are roaming, the Redirect Server maps session requests to a server closer to the user or an alternate device.

Proxy Server: SIP proxy servers pass SIP requests on to other servers located either downstream or upstream.

Presence Server: SIP presence servers enable users to publish their status (as ‘presentities’) to other users who would like to see it (‘watchers’).
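
To give a feel for what the registrar and location server keep track of, here is a minimal, illustrative sketch of a binding table mapping an AOR to the contacts (devices) currently registered for it. Real registrars also handle authentication, expiry timers and much more; none of the names below come from an actual SIP implementation.

    # Illustrative sketch of a registrar/location-service binding table.
    from collections import defaultdict

    class Registrar:
        def __init__(self):
            # AOR -> set of contact URIs currently registered for that user
            self.bindings = defaultdict(set)

        def register(self, aor, contact_uri):
            """Called when a device comes on-line and sends a REGISTER request."""
            self.bindings[aor].add(contact_uri)

        def locate(self, aor):
            """Used by proxy/redirect servers to find a user's devices."""
            return sorted(self.bindings.get(aor, set()))

    registrar = Registrar()
    registrar.register("sip:xxx@yyy.com", "sip:xxx@192.0.2.10:5060")
    registrar.register("sip:xxx@yyy.com", "sip:xxx@mobile.yyy.com")
    print(registrar.locate("sip:xxx@yyy.com"))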

Call setup Flow

The diagram below shows the initiation of a call from the PSTN network (section A), connection (section B) and disconnect (section C). The flow is quite easy to understand. One of the downsides is that when a complex session is being set up it is quite easy to reach 40 to 50+ separate transactions, which could lead to unacceptable set-up times – especially if the SIP session is being negotiated across the best-effort Internet.

(Picture source: NMS Communications)
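
For readers who have not seen SIP ‘on the wire’, the sketch below shows the shape of the text-based messages involved: an INVITE in the style of the examples in RFC 3261, with the usual response/acknowledge/tear-down exchange summarised in a comment. The addresses, branch and tag values are placeholders, not from a real trace.

    # Basic two-party flow: INVITE -> 180 Ringing -> 200 OK -> ACK ... BYE -> 200 OK
    invite = "\r\n".join([
        "INVITE sip:bob@yyy.com SIP/2.0",
        "Via: SIP/2.0/UDP client.xxx.com:5060;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        "To: Bob <sip:bob@yyy.com>",
        "From: Alice <sip:alice@xxx.com>;tag=1928301774",
        "Call-ID: a84b4c76e66710@client.xxx.com",
        "CSeq: 314159 INVITE",
        "Contact: <sip:alice@client.xxx.com>",
        "Content-Length: 0",
        "", "",
    ])
    print(invite)  # a session description (SDP) body would normally follow the headers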

Round-up

As a standard, SIP has had a profound impact on our daily lives and sits comfortably alongside those other protocol acronyms that have fallen into the daily vernacular such as IP, HTTP, www and TCP. Protocols that operate at the application level seem to be so much more relevant to our daily lives than those that are buried in the network such as MPLS and ATM.

There is still much to achieve by building capability on top of SIP, such as federated services and, more importantly, interoperability. Bodies working on interoperability include SIPcenter, SIP Forum, SIPfoundry, SIPit and the IETF’s SPEERMINT working group. More fundamental areas under evaluation are authentication and billing.

More in-depth information about SIP can be found at http://www.tech-invite.com, a portal devoted to SIP and surrounding technologies.

Next time you buy a SIP Wi-Fi phone from your local shop, install it, find that it works first time AND saves you money, just think about all the work that has gone into creating this software wonder. Sometimes, standards and open software hit a home run. SIP is just that.

Addendum #1: Do you know your ENUM?


IP Multimedia Subsystem or bust!

May 10, 2007

I have never felt so uncomfortable about writing about a subject as I am now while contemplating IP Multimedia Subsystem (IMS). Why this should be I’m not quite sure.

Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:

“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”

This was an initiative to bring computers and software to the world of voice switches, enabling carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:

“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”

Now there was absolutely nothing wrong with that vision and the core technology was relatively straightforward (database-lookup number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death, it just fizzled out. It didn’t really disappear as such, as most of the IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.

Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus Intelligent networks goes right back to the early 1990s and is still raging today – well at least simmering.

Put bluntly, carriers laudably want intelligence to be based in the network so they are able to provide, manage and control applications and derive revenue that will compensate for plummeting Plain Old Telephony Service (POTS) revenues. Most IT and Internet people, however, do not share this vision, as they believe it holds back service innovation, which generally comes from small companies. There is a certain amount of truth in this view, as there are clear examples of it happening today if we look at the fixed and mobile industries.

Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?

What is IP Multimedia Subsystem (IMS)?

IMS is an architectural framework or reference architecture – not a standard – that provides a common method for IP multiple-media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world – and the communications world come to that – a good part of this activity could be encompassed by the term middleware. Middleware is an interface (abstraction) layer that sits between the networks and the applications / services and provides a common Application Programming Interface (API).

The commercial justification of IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.

The technical vision of IMS is about delivering seamless services where customers are able to access any type of service, from any device they want to use, with single sign-on, with common contacts and fluidity between wireline and wireless services. IMS has ambitions about delivering:

  • Common user interfaces for any service
  • Open application server architecture to enable a ‘rich’ service set
  • Separate user data from services for cross service access
  • Standardised session control
  • Inherent service mobility
  • Network independence
  • Inter-working with legacy IN applications

One of the comments I came across on the Internet from a major telecomms equipment vendor was that IMS was about the “Need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far as innovative services such as those mentioned generally do not come out of the carrier world.

Traditionally, each application or service offered by a carrier sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols, and running in complete isolation from other services, each of which sits in its own silo. In many ways this reflects the same situation that provided the motivation to develop a common control plane for data services called GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.


Removal of service silos
Source: Business Communications Review, May 2006

As with GMPLS, most large equipment vendors are committed to IMS and supply IMS compliant products. As stated in the above article:

“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make ‘anything over IP and IP over everything’ a reality.”

Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 90s – albeit a trifle more complex. This is the view of Bob Bellman in an article entitled From Softswitching To IMS: Are We There Yet? Many of the core elements of a softswitch architecture are to be found in the IMS architecture, including the separation of the control and data planes.

VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006

Another associated reference architecture that is aligned with IMS, and is being pushed hard by software and equipment vendors in the enterprise world, is Service Oriented Architecture (SOA), which focuses on services as the core design principle.

IMS has been developed by an industry consortium and originated in the mobile world in an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP, with 3GPP2 and TISPAN subsequently extending it. Nowadays, just about every standards body seems to be involved, including the Open Mobile Alliance, ANSI, ITU, IETF, the Parlay Group and the Liberty Alliance – fourteen in total.

Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (three-, four- and five-letter acronyms), which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:

#1 Applications layer:

The application layer contains Application Servers (AS) which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). As in GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS) – or an HSS in a 3rd-party network if the user is roaming (in GSM this is called the Home Location Register (HLR)).

(Source: Lucent Technologies)

The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.

Breakout Gateways provide routing information based on telephone-number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.

PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.

#2 Control layer:

The control plane hosts the HSS, which is the master database of user identities and of the individual calls or service sessions currently being used by each user. There are several roles that a SIP Call Session Control Function (CSCF) can undertake:

  • P-CSCF (Proxy-CSCF): provides similar functionality to a proxy server in an intranet
  • S-CSCF (Serving-CSCF): the core SIP server, always located in the home network
  • I-CSCF (Interrogating-CSCF): a SIP server located at a network’s edge whose address can be found in DNS by 3rd-party SIP servers.
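
As an illustration of that last point, a 3rd-party SIP server typically finds the entry point for a domain through DNS, in the spirit of RFC 3263. The sketch below uses the third-party dnspython package (my choice of library, not something mandated by IMS) to pull the SRV records advertising a domain’s SIP servers; the domain is a placeholder.

    # Sketch: discovering a domain's SIP servers via DNS SRV records (RFC 3263 style).
    # Requires the third-party 'dnspython' package.
    import dns.resolver

    def sip_servers(domain, transport="udp"):
        answers = dns.resolver.resolve(f"_sip._{transport}.{domain}", "SRV")
        # Each answer carries a priority, weight, port and target host.
        return [(r.priority, str(r.target).rstrip("."), r.port) for r in answers]

    print(sip_servers("example.com"))  # e.g. [(10, 'sipserver.example.com', 5060)]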

#3 Transport layer:

IMS encompasses any service that uses IP / MPLS as transport and pretty much all of the fixed and mobile access technologies, including ADSL, cable modem DOCSIS, Ethernet, Wi-Fi, WiMAX and CDMA wireless. It has little choice in this matter: if IMS is to be used it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators – IMS is also focusing on the use of IPv6, with IPv4 ‘only’ being supported in the near term.

Roundup

IMS represents a tremendous amount of work spread over six years and uses as many existing standards as possible, such as SIP and Parlay. IMS is a work in progress and much still needs to be done – security and seamless inter-working of services are but two examples.

All the major telecommunications software and middleware vendors and integrators are involved, and just thinking about the scale of the task needed to put in place common control for a whole raft of services makes me wonder just how practical the implementation of IMS actually is. Don’t get me wrong, I am a real supporter of these initiatives because it is hard to come up with an alternative vision that makes sense, but boy I’m glad that I’m not in charge of a carrier IMS project!

The upsides of using IMS in the long term are pretty clear and focus around lowering costs, quicker time to market, integration of services and, hopefully, single log-in.

It’s some of the downsides that particularly concern me:

  • Non-migration of existing services: As we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure, such as instant conferencing, messaging, gaming, personal information management, presence, location-based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services in the short term onto a brand new infrastructure – especially when that infrastructure is based on an incomplete reference architecture?

    IMS is a long-term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future that can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever will.

  • Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This was a comment from Keith Nissen of In-Stat in a BCR article.
  • No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture is incomplete, services will continue to be rolled out without IMS, further inflaming the non-migration of existing services issue raised above.
  • Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused on getting IMS integration sorted, they should really be working on externally focused services. Without these services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS Intelligent Networks reborn, destined to suffer the same fate?
  • OSS integration: Any IMS system will need to integrate with a carrier’s often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
  • Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That role lies with small companies on the Internet creating Web 2.0 services that utilise technologies such as presence, VoIP and AJAX today. Will any of these companies care whether a carrier has an IMS infrastructure in place?
  • Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
  • Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.

I said at the beginning of the post that I felt uncomfortable about writing about IMS and now that I’m finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day and whether it diverts effort, money and limited resources away from where they should be applied – on creating interesting services and gaining market share. Only time will tell.

Addendum: In a previous post, I wrote about the IETF’s Path Computation Element Working Group and it was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF), which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here, starting at slide 10. Does RACF compete with PCE or could PCE be a part of RACF?


The tale of DOCSIS and cable operators

May 2, 2007

When anyone who uses the Internet on a regular basis is presented with an opportunity to upgrade their access speed they will usually jump at it without a second thought. There used to be a similar rush with personal computer operating systems and processor speeds, but this is a less common trend these days as the benefits to be gained are often ephemeral, as we have recently seen with Microsoft’s Vista. (Picture: SWINOG)

However, the advertising headline for many ISPs still focuses on “XX Mbit/s for as little as YY Pounds/month”. Personally, in recent years, I have not seen too many benefits in increasing my Internet access speed because I see little improvement when browsing normal WWW sites: their performance is no longer bottlenecked by my access connection but rather by the performance of the servers. My motivation to get more bandwidth into my home is the need to have sufficient bandwidth – both upstream and downstream – to support my family’s need to use multiple video and audio services at the same time. Yes, we are as dysfunctional as everyone else, with computers in nearly every room of the house and everyone wanting to do their own video or interactive thing.

I recently posted an overview of my experience of Joost, the new ‘global’ television channel recently launched by Skype founders, Niklas Zennstrom and Janus Friis – Joost’s beta – first impressions and it’s interesting to note that as a peer-to-peer system it does require significant chunks of your access bandwidth as discussed in Joost: analysis of a bandwidth hog.

The author’s analysis shows that it “pulls around 700 kbps off the internet and onto your screen” and “sends a lot of that data on to other users – about 220 kbps upstream”. If Joost is a window on the future of IPTV on the Internet, then it should be of concern to the ISP and carrier communities and it should also be of concern to each of us that uses it. 220 kbit/s is a good chunk of the 250 kbit/s upstream capability of ADSL-based broadband connections. If the upstream channel is clogged, response times on all services being accessed will be affected – even more so if several individuals are accessing Joost over a single broadband connection.
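
A quick back-of-the-envelope check of those figures (the per-viewer and ADSL numbers come from the paragraph above; the viewer counts are just for illustration):

    # Rough arithmetic on Joost's quoted upstream demand vs. ADSL upstream capacity.
    upstream_per_viewer_kbps = 220   # figure quoted in the analysis above
    adsl_upstream_kbps = 250         # typical ADSL upstream capability at the time
    for viewers in (1, 2):
        demand = viewers * upstream_per_viewer_kbps
        print(f"{viewers} viewer(s): {demand} kbit/s "
              f"({demand / adsl_upstream_kbps:.0%} of the upstream channel)")
    # One viewer already uses ~88% of the channel; a second viewer would exceed it.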

It’s these issues that make me want to upgrade my bandwidth and think about the technology that I could use to access the Internet. In this space there has been an on-going battle for many years between twisted copper pair ADSL or VDSL used by incumbent carriers and cable technology used by competitive cable companies such as Virgin Media to deliver Internet to your home.

Cable TV networks (CATV) have come a long way since the 60s, when they were based on simple analogue video distribution over coaxial cable. These days they are capable of delivering multiple services and are highly interactive, allowing in-band user control of content, unlike satellite delivery which requires a PSTN-based back-channel. The technical standard that enables these services is developed by CableLabs and is called the Data Over Cable Service Interface Specification (DOCSIS). This defines the interface requirements for cable modems involved in high-speed data distribution over cable television networks.

The graph below shows the split between ADSL- and cable-based broadband subscribers (Source: Virgin Media), with cable trailing ADSL to a degree. The linked page provides an excellent overview of the UK broadband market in 2006 so I won’t comment further here.

A DOCSIS based broadband cable system is able to deliver a mixture of MPEG-based video content mixed with IP enabling the provision of a converged service as required in 21st century homes. Cable systems operate in a parallel universe, well not quite, but they do run a parallel spectrum enclosed within their cable network isolated from the open spectrum used by terrestrial broadcasters. This means that they are able to change standards when required without the need to consider other spectrum users as happens with broadcast services.

The diagram below shows how the spectrum is split between upstream and downstream data flows (Picture: SWINOG) and various standards specify the data modulation (QAM) and bit-rate standards. As is usual in these matters, there are differences between the USA and European standards due to differing frequency allocations and standards – NTSC in the USA and PAL in Europe. Data is usually limited to between 760 and 860MHz.

The DOCSIS standard has been developed by CableLabs and the ITU with input from a multiplicity of companies. The customer premises equipment is called a cable modem and the Central Office (head-end) equipment is called a cable modem termination system (CMTS).

Since 1997 there have been various releases (Source: CableLabs) of the DOCSIS standard, the most recent being version 3.0, released in 2006.

DOCSIS 1.0 (Mar. 1997) (High Speed Internet Access) Downstream: 42.88 Mbit/s and Upstream: 10.24 Mbit/s

  • Modem price has declined from $300 in 1998 to <$30 in 2004

DOCSIS 1.1 (Apr. 1999) (Voice, Gaming, Streaming)

  • Interoperable and backwards-compatible with DOCSIS 1.0
  • “Quality of Service”
  • Service Security: CM authentication and secure software download
  • Operations tools for managing bandwidth service tiers

DOCSIS 2.0 (Dec. 2001) (Capacity for Symmetric Services) Downstream: 42.88 Mbit/s and Upstream: 30.72 Mbit/s

  • Interoperable and backwards compatible with DOCSIS 1.0 / 1.1
  • More upstream capacity for symmetrical service support
  • Improved robustness against interference (A-TDMA and S-CDMA)

DOCSIS 3.0 (Aug. ’06) Downstream: 160 Mbit/s and Upstream: 120 Mbit/s

  • Wideband services provided by expanding the usable bandwidth through channel bonding, i.e. instead of a data stream being delivered over a single channel, it is multiplexed over a number of channels (a rough sketch of the arithmetic follows this list; a previous post talked about bonding in the ADSL world – Sharedband: not enough bandwidth?)
  • Support of IPv6
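
Here is the rough arithmetic behind those DOCSIS 3.0 headline rates, using the per-channel figures already quoted for DOCSIS 1.0 and 2.0 above. The choice of four bonded channels in each direction is my assumption for illustration; operators can bond more.

    # Rough arithmetic behind DOCSIS 3.0 channel bonding (illustrative assumptions).
    per_channel_down_mbps = 42.88   # single downstream channel rate quoted for DOCSIS 1.0
    per_channel_up_mbps = 30.72     # single upstream channel rate quoted for DOCSIS 2.0
    bonded_channels = 4             # assumed bonding group size

    raw_down = bonded_channels * per_channel_down_mbps   # ~171.5 Mbit/s raw
    raw_up = bonded_channels * per_channel_up_mbps       # ~122.9 Mbit/s raw
    print(f"Downstream: ~{raw_down:.0f} Mbit/s raw (headline ~160 Mbit/s after overheads)")
    print(f"Upstream:   ~{raw_up:.0f} Mbit/s raw (headline ~120 Mbit/s after overheads)")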

    Roundup

    With the release of the DOCSIS 3.0 standard it looks like cable companies around the world are now set to upgrade the bandwidth they can offer their customers in coming years. However, this will be an expensive upgrade for them to undertake, with head-end equipment needing to be upgraded first, followed by field cable-modem upgrades over time. I would hazard a guess that it will be at least five years before the average cable user sees the benefits.

    I also wonder what price will need to be paid for the benefit of gaining higher bandwidth through channel bonding when there is limited spectrum available for data services on the cable system. A limit on subscriber-number scalability, perhaps?

    I was also interested to read about the possible adoption of IPv6 in DOCSIS 3.0. It was clear to me many years ago that IPv6 would ‘never’ (never say never!) be widely adopted on the Internet because of the scale of the task. Its best chance would be in closed systems such as satellite access services and IPTV systems. Maybe cable systems are another option. I will catch up on IPv6 in a future post.


    PONs are anything but passive

    April 18, 2007

    Passive Optical Networks (PONs) are an enigma to me in many ways. On one hand the concept goes back to the late 1980s and has been floating around ever since, with obligatory presentations from the large vendors whenever you visited them. Yes, for sure there are Pacific Rim countries and the odd state or incumbent carrier in the western world deploying the technology, but they never seemed to impact my part of the world. On the other hand, the technology would provide virtual Internet nirvana for me at home, with 100 Mbit/s available to support video on demand for each member of my family who, in 21st century fashion, have computers in virtually every room of the house! This high bandwidth still seems as far away as ever with an average downstream bandwidth of 4.5 Mbit/s in the UK. I see 7 Mbit/s as I am close to a BT exchange. We are still struggling to deliver 21st century data services over 19th century copper wires using ATM-based Digital Subscriber Line (DSL) technology. If you are in the right part of the country, you get marginally higher rates from your cable company. Why are carriers not installing optical fibre to every home?

    To cut to the chase, it’s because of the immense costs of deploying it. Fibre to the Home (FTTH), as it is known, requires the installation of a completely new optical fibre infrastructure between the carrier’s exchanges and homes. It would almost certainly need to be government-led and government-funded to be worthwhile – which of course is what has happened in the Far East. Here in the UK this is further clouded by the existing cable industry, which has struggled to reach profitability based on massive investments in infrastructure during the 90s.

    What are Passive Optical Networks (PONs)?

    The key word is passive. In standard optical transmission equipment, used in the core of public voice and data networks, all of the data being transported is switched using electrical or optical switches. This means that investment needs to be made in the network equipment to undertake that switching, and that is expensive. In a PON, instead of electrical equipment joining or splitting optical fibres, fibres are simply fused together at minimum cost – just like T-junctions in domestic plumbing. Light travelling down the fibre then splits or combines when it hits a splitter. Equipment in the carrier’s exchange (or Central Office [CO]) and in the customer’s home then multiplexes or de-multiplexes an individual customer’s data stream.

    Although the use of PONs considerably reduces equipment costs, as no switching equipment is required in the field and hence no electrical power feeds are required, it is still an extremely expensive technology to deploy, making it very difficult to create a business case that stacks up. A major problem is that there is often no free space available in existing ducts, pushing carriers to new digs. Digging up roads and laying the fibre is a costly activity. I’m not sure what the actual costs are these days, but £50 per metre dug used to be the cost many years ago.

    As seems to be the norm in most areas of technology, there are two PON standards slugging it out in the market place with a raft of evangelists attached to both camps. The first is Gigabit PON (GPON) and the second is Ethernet PON (EPON).

    About Gigabit PON (GPON)

    The concept of PONs goes back to the early 1990s, to the time when the carrier world was focussed on a vision of ATM being the world’s standard packet- or cell-based WAN and LAN transmission technology. This never really happened, as I discussed in The demise of ATM, but ATM lives on in other services defined around that time. Two examples are broadband Asymmetric DSL (ADSL) and the lesser known ATM Passive Optical Network (APON).

    APON was not widely deployed and was soon superseded by the next best thing – Broadband PON (BPON), also known as ITU-T G.983 as it was developed under the auspices of the ITU. More importantly, APON was limited in the number of data channels it could handle, and BPON added Wavelength Division Multiplexing (WDM) (covered in Technology Inside in Making SDH, DWDM and packet friendly). BPON uses one wavelength for 622 Mbit/s downstream traffic and another for 155 Mbit/s upstream traffic.

    If there are 32 subscribers on the system, that bandwidth is divided among the 32 subscribers, plus overhead. Upstream, a BPON system provides 3 to 5 Mbit/s per subscriber when fully loaded.
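
    The per-subscriber arithmetic is simple enough to sketch (the 32-way split and the line rates come from the text above; ignoring protocol overhead is my simplification):

        # Per-subscriber share of a 32-way BPON split (before protocol overhead).
        downstream_mbps, upstream_mbps, subscribers = 622, 155, 32
        print(f"Downstream: ~{downstream_mbps / subscribers:.1f} Mbit/s per subscriber")  # ~19.4
        print(f"Upstream:   ~{upstream_mbps / subscribers:.1f} Mbit/s per subscriber")    # ~4.8
        # The ~4.8 Mbit/s figure is where the '3 to 5 Mbit/s when fully loaded' claim comes from.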

    GPON is the latest upgrade from this stable; it uses an SDH-like data framing standard and provides a data rate of 2.5 Gbit/s downstream and 1.25 Gbit/s upstream. The big technical difference is that GPON is designed to carry Ethernet and IP natively rather than ATM.

    It is likely that GPON will find its natural home in the USA and Europe. An example is Verizon, who is deploying 622 Mbit/s BPON to its subscribers but is committed to upgrading to GPON within twelve months. In the UK, BT’s OpenReach has selected GPON for a trial.

    About Ethernet PON (EPON)

    EPON comes from the IEEE stable and is called IEEE 802.3ah. EPON is based on Ethernet standards and derives the benefits of using this commonly adopted technology. EPON uses only a single fibre between the subscriber split and the central office and does not require any power in the field, as would be needed if kerb-side equipment were deployed. EPON also supports downstream Point-to-Multipoint (P2MP) broadcast, which is very important for broadcasting video. As with carrier-grade Ethernet standards such as PBB, some core Ethernet features such as CSMA/CD have been dropped in this new use of Ethernet. Only one subscriber is able to transmit at any time, using a Time Division Multiple Access (TDMA) protocol.

    A typical deployment is shown in the picture below: one fibre to the exchange connecting 32 subscribers.

    EPON architecture (Source: IEEE)

    A Metro Ethernet Forum overview of EPON can be found here.

    The Far East, especially Japan, has taken EPON to its heart with the vast majority being installed by NTT, the major Japanese incumbent carrier, followed by Korea Telecom with 100s of thousands of EPON connections.

    Roundup

    There is still lots of development taking place in the world of PONs. On one hand, 10 Gbit/s EPON is being talked about to give it an edge over 2.5 Gbit/s GPON. On the other, WDM PONs are being trialled in the Far East, which would enable far higher bandwidths to be delivered to each home. WDM-PON systems allocate a separate wavelength to each subscriber, enabling the delivery of 100 Mbit/s or more.

    Only this month it was announced that a Japanese MSO Moves 160 Mbit/s using advanced cable technology (the subject of a future TechnologyInside post).

    DSL-based broadband suffers from a pretty major problem: the farther the subscriber is from their local exchange, the lower the data rate that can be supported reliably. PONs do not have this limitation (well, technically they do, but the distance is much greater). So in the race to increase data rates in the home, PONs are a clear-cut winner along with cable technologies such as DOCSIS 3.0 used by cable operators.

    Personally, I would not expect PON deployment to increase over and above its snail-like pace in Europe at any time in the near future. Expect to see the usual trials announced by the largest incumbent carriers such as BT, FT and DT, but don’t hold your breath waiting for it to arrive at your door. This has been questioned recently in a government report suggesting that the lack of high-speed Internet access could jeopardise the UK’s growth in future years.

    You may think so what – “I’m happy with 2 – 7 Mbit/s ADSL!” – but I can say with confidence that you should not be happy. The promise of IPTV services is really starting to be delivered at long last, and encoding bandwidths of 1 to 2 Mbit/s really do not cut the mustard in the quality race. This is the case for standard definition, let alone high-definition TV. Moreover, with each family member having a computer and television in their own room and each wanting to watch or listen to their own programmes simultaneously, low-speed ADSL connections are far from adequate.

    One way out of this is to bond multiple DSL lines together to gain that extra bandwidth. I wrote a post a few weeks ago – Sharedband: not enough bandwidth? – about a company that provides software to do just this. The problem is that you would require an awful lot of telephone lines to get the 100 Mbit/s which is what I really want! Maybe I should emigrate?

    Addendum #1: Economist Intelligence Unit: 2007 e-readiness rankings


    Chaos in Bangladesh’s ‘Illegal’ VoIP businesses

    April 11, 2007


    Take a listen to a report on BBC Radio Four’s PM programme broadcast on the 9th April which talks about the current chaos in Bangladesh brought about by the enforced closure of ‘illegal’ VoIP businesses. This is one of the impacts of the state of emergency imposed three months ago and has resulted in a complete breakdown of the Bangladeshi phone network.

    It seems that VoIP calls account for up to 80% of telephone traffic from abroad into the country, driven by low call rates of between 1 and 2 pence per minute.

    The new military-backed government has been waging war on small VoIP businesses, with the “illegality and corruptions of the past being too long tolerated”. Many officials have been arrested, buildings pulled down and businesses closed.

    The practical result has thrown the telephone industry into chaos as hundreds of thousands of Bangladeshis living abroad try to call home only to get the engaged tone.

    “In many countries VoIP is legal but in Bangladesh it has been long rumoured that high profile politicians have been operating the VoIP businesses and had an interest in keeping them outside of the law and unregulated to avoid taxes on the enormous revenues they generated.”

    The report says that the number of conventional phone lines is being doubled in April, but only to 30,000 lines – with a population of over 140 million people this is far too few!

    You can listen to the report here: Chaos in Bangladesh’s Illegal VoIP business (Copyright BBC).

    It really is amazing how disruptive a real disruptive technology can be, but when this happens it usually comes back to bite us!

    I talked about the SIM box issue in Revector, detecting the dark side of VoIP, and the Bangladesh situation provides the reasoning behind why incumbent carriers are often hell-bent on stamping out VoIP traffic. In the western world the situation is no different, but governments and carriers do not just bulldoze the businesses – maybe they should in some cases!

    Addendum #1: the-crime-of-voice-over-ip-telephony/


    iotum’s Talk-Now is now available!

    April 4, 2007

    In a previous post The magic of ‘presence’, I talked about the concept of presence in relation to telecommunications services and looked at different examples of how it had been implemented in various products.

    One of the most interesting companies mentioned was iotum, a Canadian company. iotum had developed what they called a relevance engine, which enabled ‘ability to talk’ and ‘willingness to talk’ information to be provided to a telecom service by attaching it to appropriate equipment such as a Private Branch Exchange (PBX) or a call centre Automatic Call Distribution (ACD) manager.

    One of the biggest challenges for any company wanting to translate presence concepts into practical services is how to make them useable rather than just a fancy concept used to describe a number of peripheral and often unusable features of a service. Alec Saunders, iotum’s founder, has been articulating his ideas about this in his blog Voice 2.0: A Manifesto for the Future. Like all companies that have their genesis in the IT and applications world, Alec believes that “Voice 2.0 is a user-centric view of the world… ‘it’s all about me’ — my applications, my identity, my availability.”

    And rather controversially, if you come from the network or the mobile industry: “Voice 2.0 is all about developers too — the companies that exploit the platform assets of identity, presence, and call control. It’s not about the network anymore.” Oh, by the way, just to declare my partisanship, I certainly go along with this view and often find that the stove-pipe and closed attitudes sometimes seen in mobile operators are one of the biggest hindrances to the growth of data-related applications on mobile phones.

    There is always a significant technical and commercial challenge in OEMing platform-based services to service providers and large mobile operators, so the launch of a stand-alone service that is under the complete control of iotum is not a bad way to go. Any business should have full control of its own destiny, and the choice of the relatively open Blackberry platform gives iotum a user base they can clearly focus on to develop their ideas.

    iotum launched the beta version of Talk-Now in January; it provides a set of features aimed at helping Blackberry users make better use of the device the world has become addicted to over the last few years. Let’s talk turkey, what does the Talk-Now service do?

    According to the web site, as seen in the picture on the left, it provides a simple-in-concept bolt-on service for Blackberry phone users to see and share their availability status with other users.

    At the in-use end of the service, Talk-Now interacts with a Blackberry user’s address book by adding colour coding to contact names to show each individual’s availability. On initial release only three colours were used: white, red and green.

    Red and green clearly show when a contact is either Not Available or Available; I’ll talk about white in a minute. Yellow was added later, based on user feedback, to indicate an Interruptible status.

    The idea behind Talk-Now is that it helps users reduce the amount of time they waste in non-productive calls and leaving voicemails. You may wonder how this availability guidance is provided by users. A contact with a white background provides the first indication of how this is achieved.

    Contacts with a white background are not Talk-Now users, so their availability information is not available (!). One of the key features of the service is therefore an Invite People process to get them to use Talk-Now and see your availability information.

    If you wish a non-Talk-Now contact to see your availability, you can select their name from the contact list and send them an “I want to talk with you” email. This email provides a link to an Availability Page as shown below, explains the benefits of using the service (I assume) and asks the contact to sign up. The Availability Page is secure, only available to that contact, and then only for a short time.

    Once a contact accepts the invite and signs up to the service, you will be able to see their availability – assuming that they set up the service.

    So, how do you indicate your availability? This is set up with a small menu as shown on the left. Using this you can set up status information.

    Busy: set your free/busy status manually from your BlackBerry device

    In a meeting: iotum Talk-Now synchronizes with your BlackBerry calendar to know if you are in a meeting.

    At night: define which hours you consider to be night time.

    Blocked group: you can add contacts to the “blocked” group.

    You can also set up VIPs (Very Important Persons) who are individuals who receive priority treatment. This category needs to be used with care. Granting VIP status to a group overrides the unavailability settings you have made. You can also define Workdays. Some groups might be VIPs during work hours, while other groups might get VIP status outside of work. This is designed to help you better manage your personal and business communications.

    There is also a feature whereby you can be alerted when a contact becomes available by a message being posted on your Blackberry as shown on the right.

     

    Many of the above setting can be set up via a web page, for example:

    Setting your working week

    Setting contact groups

    However, it should be remembered that like Plaxo and LinkedIn, this web-based functionality does require you to upload – ‘synchronise’ – your Blackberry contact list to the iotum server, and many Blackberry users might object to this. It should be noted as well that the calendar is also accessed to determine when you are in meetings and deemed busy.

    If you want to hear more, then take a look at the video that was posted after a visit with Alec Saunders and the team by Blackberry Cool last month:

    Talk-Now looks to be an interesting and well thought out service. Following traditional Web 2.0 principles, the service is provided for free today with the hope that iotum will be able to charge for additional features at a future date.

    I wish them luck in their endeavours and will be watching intensely to see how they progress in coming months.


    MPLS-TE and network traffic engineering

    April 2, 2007

    In my post entitled The rise and maturity of MPLS I said that one of the principal reasons for a carrier to implement MPLS was the need for what is known as traffic engineering in their core IP networks. Before the advent of MPLS this capability was supplied by ATM, with many of the world’s terrestrial Internet backbones being based on this technology. ATM provided a switching capability in Points of Presence (PoPs) that enabled the automatic switchover to an alternative inter-city ATM pipe in case of failure. I say ‘inter-city’ because ATM was not generally implemented on a transoceanic basis, as it was deemed to be expensive and inefficient due to its 17% overhead, commonly known as cell tax (Picture: Aria Networks planning software. Presentation to the UK Network Operators Forum 2006).

    IP engineers were keen to remove this additional ATM layer and replace it with a control capability, which became MPLS. However, MPLS in its original guise did not really ‘cut the mustard’ for use in a traffic-engineered regime, so the standard was enhanced through the release of extensions known as MPLS-TE. This post will look at Traffic Engineering and its sibling activity Capacity Planning, and their relationship to MPLS.

    Capacity planning

    Capacity planning is an activity that is undertaken by all carriers on a ‘regular’ basis. Nowadays this would most likely be undertaken annually, although in the heady, high-growth days of the latter half of the 90s it was rarely undertaken less often than quarterly.

    Capacity planning is an important function as it directly drives the purchase of additional network equipment or the purchase or building of additional bandwidth capacity for the physical network. Capacity planning is undertaken by the planning department and principally consists of the following activities:

    Network topology: The starting point of any planning exercise is to profile the existing network to act as a benchmark to build upon. This consists of two things. The first is a complete database of the nodes or PoPs and network link bandwidths, i.e. their maximum capacities. This sounds easier than it is in reality. In many instances carriers do not know the full extent of the equipment deployed, often the result of one too many acquisitions. This discovery of assets can either be based on an on-going manual spreadsheet or database exercise, or software can be used to automatically discover the up-to-date installed network topology. Another way is to export network configurations from network equipment such as routers.

    Traffic matrices: What is needed next is detailed link resource utilisation data, or traffic profiles, for each service type. These are often called traffic matrices. Links are the pipes interconnecting a network’s PoPs and their utilisation is how much of the links’ bandwidth is being used. As IP traffic is very dynamic and varies tremendously according to the time of day, good traffic engineering leads to good operational practice such as never loading a link beyond a certain percentage – say 50%. Every carrier will have their own standard, which could quite easily be made higher to save money but at the risk of poor network performance at peak times. Clearly, engineers and accountants have different perspectives in these discussions! (Raw IP traffic flow: Credit. Cariden.)
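
    A toy example of how that utilisation rule might be applied to a traffic matrix (the link names, figures and the 50% threshold are illustrative assumptions, not real carrier data):

        # Toy utilisation check against a peak-loading threshold (illustrative data).
        links = {                     # link -> (peak traffic in Mbit/s, capacity in Mbit/s)
            "London-Manchester": (620, 1000),
            "London-Paris": (430, 1000),
        }
        threshold = 0.50              # operational rule: never load a link beyond 50% at peak

        for name, (peak, capacity) in links.items():
            utilisation = peak / capacity
            status = "UPGRADE CANDIDATE" if utilisation > threshold else "ok"
            print(f"{name}: {utilisation:.0%} utilised ({status})")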

    Demand forecast: At this point, capacity planning engineers turn to their product marketing and sales brethren with a request for a service sales forecast for the next planning cycle, which could be between one and three years. If you talk to any planning engineer I’m sure you will hear plaintive cries such as “I can’t plan unless I get a forecast”; however, can you think of a worse group of individuals to get this sort of information from than sales people? I would guess that this is one of the biggest challenges planning departments face.

    Once topology, current traffic matrices and forecasts for each service (IP transit, VoIP, IP VPNs, IPTV etc.) have been obtained, the task of planning for the next capacity planning period can begin. This results in – or should result in – a clear plan for the company that covers such issues as:

    • What existing link upgrades are required
    • What new links are required
    • What new or expansion to backup links are required
    • What new Capital Expenditure (CAPEX) is required
    • What increase in Operational Expenditure (OPEX) is required
    • What new migration or changeover plans are required
    • Lots of management reports, spreadsheets and graphs
    • Caveats about the on-going unreliability of the growth forecasts received!

    Traffic engineering (TE)

    While Capacity Planning is a long-term forward looking activity that is concerned with optimising network growth and performance in the face of growing service demand, traffic engineering is focused on how the network performs in respect of delivering services at a much finer granularity.

    Traffic engineering in networks has a history as long as telephones have been around and is closely associated with A.K. Erlang. One of the fundamental metrics in voice Public Switched Telephone Networks (PSTN) was named after him – the Erlang. An Erlang is a measure of the occupancy or utilisation of voice circuits, regardless of whether traffic is actually flowing at a given instant. Erlang-based calculations were / are used to calculate Quality of Service (QoS) and the optimum utilisation of fixed-bandwidth links, taking into account the amount of traffic at peak times.
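
    For readers who have not met it, the classic Erlang B formula estimates the probability that a call is blocked given the offered load (in Erlangs) and the number of circuits. The sketch below uses the standard iterative form of the formula; the traffic figures are made up purely for illustration.

        # Erlang B blocking probability, computed with the usual iterative recurrence.
        def erlang_b(offered_erlangs, circuits):
            blocking = 1.0
            for n in range(1, circuits + 1):
                blocking = (offered_erlangs * blocking) / (n + offered_erlangs * blocking)
            return blocking

        # e.g. 10 Erlangs of offered traffic onto 15 circuits:
        print(f"Blocking probability: {erlang_b(10, 15):.2%}")   # roughly 3-4% of calls blocked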

    Traffic engineering is even more important in the highly dynamic world of IP networks and carriers are able to experience a considerable number of benefits if traffic engineering is taken seriously by management.

    • Cost optimisation: Providing network links is an expensive pastime by the time you take IP equipment, optical equipment, OSS costs and OPEX into account. The more that a network is fully utilised without degradation, the more money can flow to the bottom line.
    • Congestion management: If a network is badly traffic engineered – whether through under-planning, under-spending or under-resourcing – there is more chance of network problems such as outages or congestion impacting a customer’s experience. The telecoms world is stuffed full of examples of where this has happened.
    • Dynamic services and traffic profiles: Traffic profiles and flows can change quite considerably over a period of time when new services with different traffic profiles are launched without involving network planners. In an age when there is considerable management pressure to reduce time-to-market, this can happen more often than many companies would admit to.
    • Efficient routing: In MPLS and the limitations of the Internet I wrote about how one of the strengths of the IP protocol is that a packet can always find a path to its destination if one exists, but that strength creates problems when a service requires predictable performance. Traffic-engineered networks provide paths for critical services that are deterministic / predictable from a path perspective and from a Quality of Service (QoS) perspective. It would not be an overstatement to say that this is pretty much mandatory in these days of converged Next Generation Networks.
    • Availability, resilience and fast restoration: If a network’s customers see an outage at any time, the consequences can be catastrophic from a churn or brand-image perspective, so high availability is a crucial network metric that needs to be monitored. There is a tremendous difference in perceived reliability between PSTN voice networks and IP networks. For example, tell me the last time your home telephone broke down? It’s not that PSTN networks are more reliable than IP networks – they’re not – it’s just that PSTN networks have been better designed to transparently work around broken equipment or broken links. Subscribers, to use that old telephony term, are blissfully unaware of a network outage. Of course, if a digger cuts through a major fibre and the SDH backbone ring is not actually a ring… Well, that’s another story.
    • QoS and new services: Real time services need an ability to separate latency-critical services such as VoIP from non-critical services such as email. Traffic engineering is a critical tool in achieving this.

    Multi-protocol Label Switching – Traffic Engineering (MPLS-TE)

    The ‘-TE’ suffix is used to describe other technologies as well, notably in the attempts to make Ethernet carrier grade in quality, which are discussed in my posts on PBB-TE and T-MPLS, the latter being built on the back of MPLS-TE (Picture credit: OpNet planning software).

    As mentioned above, before the advent of MPLS-TE, carriers of IP traffic relied on the underlying transport – networks such as ATM – for traffic engineering. MPLS-TE consists of a set of extensions to MPLS that enable native traffic engineering within an MPLS environment. Of course, this does not remove the need to traffic engineer any layer-1 transport network that MPLS may be carried over. MPLS-TE requirements were covered in the IETF’s RFC 2702.

    What does MPLS-TE bring to the TE party?

(1) Explicit or constraint-based end-to-end routing: The picture below shows a small network where traffic flowing from the left could travel via two alternative paths to exit on the right. This is precisely the environment that Interior Gateway Protocols (IGPs) such as Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) were designed to operate in, routing all traffic over the shortest path as shown below (Picture credit: NANOG).

This can inevitably lead to the north path shown above becoming congested while the south path remains unused, wasting expensive network assets. Before MPLS-TE, standard IGP metrics could be ‘adjusted’, ‘manipulated’ or ‘tweaked’ to reduce this possibility; however, doing so could be very complicated and very challenging to manage on a day-to-day basis, and such an approach usually required a network-wide plan. In other words, it is a bit of a horror to manage.

With MPLS-TE, using an extension to the Resource Reservation Protocol (RSVP) signalling protocol known, not surprisingly, as RSVP-TE, explicit paths can be set up and selected traffic flows forced over them as shown below.

This deterministic routing helps reduce congestion on particular links, loads the network more evenly (reducing the number of ‘orphaned’ links), ensures better utilisation of the network, and helps planners separate latency-dependent services from non-critical services and better manage upgrade costs (Picture credit: NANOG).

    These paths are called TE tunnels or label switched paths (LSPs). LSPs are unidirectional so two need to be specified to handle bi-directional traffic between two nodes.
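To make the tunnel idea a little more concrete, here is a minimal sketch of how one unidirectional LSP might be installed along an explicit path, with each hop told what label to expect on the way in and what label and next hop to use on the way out. The node names, label values and data structures are entirely my own invention for illustration; they are not anything a real RSVP-TE implementation exposes.

```python
from itertools import count

def install_lsp(path, label_pool):
    """Install one unidirectional LSP along an explicit hop-by-hop path.

    labels[node] is the MPLS label a packet carries when it arrives at that node.
    The ingress pushes a label, each transit node swaps, the egress pops.
    """
    labels = {node: next(label_pool) for node in path[1:]}   # ingress needs no incoming label
    entries = {}
    for i, node in enumerate(path):
        nxt = path[i + 1] if i + 1 < len(path) else None
        entries[node] = {
            "in_label": labels.get(node),                    # None at the ingress
            "action": "push" if i == 0 else ("pop" if nxt is None else "swap"),
            "out_label": labels.get(nxt),                    # None at the egress
            "next_hop": nxt,
        }
    return entries

pool = count(100)
north_lsp = install_lsp(["PE1", "P1", "P2", "PE2"], pool)    # hypothetical node names
south_lsp = install_lsp(["PE2", "P3", "P4", "PE1"], pool)    # the reverse direction needs its own LSP
for node, entry in north_lsp.items():
    print(node, entry)
```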

(2) Constraint Based Routing: Network planners are now able to undertake what is known as Constraint Based Routing, where traffic paths can be computed (aka Path Computation) to meet constraints other than simply the fewest hops or PoPs, which is what drives OSPF and IS-IS. The constraint could be the links with the least utilisation or the least delay, those with the most free bandwidth, or links that use a carrier’s own rather than a partner’s infrastructure.
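Stripped to its essentials, constraint-based path computation amounts to pruning any link that fails the constraint and then running an ordinary shortest-path calculation over what remains. The toy sketch below (a made-up topology, a single bandwidth constraint, nothing like a production CSPF implementation) shows the idea; with no constraint supplied it collapses back into the plain IGP behaviour described above.

```python
import heapq

def cspf(links, src, dst, min_bw=0.0):
    """Constrained shortest path: prune links with less than min_bw of free
    bandwidth, then run a shortest-path search on the remaining topology.

    links: list of (a, b, igp_cost, free_bandwidth), treated as bidirectional.
    Returns (total_cost, [nodes]) or None if no feasible path exists.
    """
    adj = {}
    for a, b, cost, bw in links:
        if bw >= min_bw:                       # the constraint: enough free bandwidth
            adj.setdefault(a, []).append((b, cost))
            adj.setdefault(b, []).append((a, cost))

    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in adj.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (cost + c, nbr, path + [nbr]))
    return None

# Hypothetical topology: a short 'north' path that is nearly full and a
# longer 'south' path with plenty of headroom (costs and Mbit/s made up).
topo = [
    ("PE1", "P1", 10, 50), ("P1", "PE2", 10, 50),                            # north: cheap but congested
    ("PE1", "P3", 10, 900), ("P3", "P4", 10, 900), ("P4", "PE2", 10, 900),   # south: longer but empty
]
print(cspf(topo, "PE1", "PE2"))              # IGP view: everything piles onto the north path
print(cspf(topo, "PE1", "PE2", min_bw=200))  # constrained: forced onto the south path
```

Run without the bandwidth constraint, everything takes the cheap northerly path; add the constraint and the computation is pushed onto the longer but emptier southerly one.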

(3) Bandwidth reservation: DiffServ-aware MPLS-TE (DS-TE) enables per-class TE across an MPLS-TE network. Physical interfaces and TE tunnels / LSPs can be told how much bandwidth may be reserved or used. This can be used to dynamically allocate, share and adjust over time the bandwidth given to critical services such as VoIP and to best-effort traffic such as Internet browsing and email.
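As a rough illustration of the per-class bookkeeping this implies (the class names, shares and figures below are mine, not anything taken from the DS-TE specifications), a link can be thought of as a set of per-class pools, with a new tunnel admitted only if its class still has headroom:

```python
class LinkBandwidthPool:
    """Per-class bandwidth accounting on one interface (illustrative only)."""

    def __init__(self, capacity_mbps, class_shares):
        # class_shares: fraction of the link each class may reserve, e.g. {"voice": 0.3}
        self.limits = {c: capacity_mbps * share for c, share in class_shares.items()}
        self.reserved = {c: 0.0 for c in class_shares}

    def admit(self, cls, mbps):
        """Admit a tunnel reservation only if the class still has headroom."""
        if self.reserved[cls] + mbps > self.limits[cls]:
            return False
        self.reserved[cls] += mbps
        return True

link = LinkBandwidthPool(1000, {"voice": 0.3, "best_effort": 0.7})
print(link.admit("voice", 200))        # True: 200 of the 300 Mbit/s voice pool used
print(link.admit("voice", 150))        # False: would exceed the voice share
print(link.admit("best_effort", 500))  # True: plenty of best-effort headroom left
```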

(4) Fast Re-Route (FRR): MPLS-TE supports local rerouting around a faulty node (node protection) or faulty link (link protection). Planners can define alternative paths to be used when a failure occurs, and FRR can reroute traffic in tens of milliseconds, minimising downtime. However, although FRR sounds like a good idea, the amount of computing effort required to calculate FRR paths for a complete network is very significant. If a carrier does not have the appropriate path computation tools, using FRR could cause significant problems by rerouting traffic non-optimally to a segment of the network that is already congested rather than one that is under-utilised (Picture: An LSP tunnel and its backup, Credit: Wandl).
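Conceptually, computing a backup is just another path computation with the protected link (or node) removed. The toy search below finds a detour from the point of local repair but says nothing about whether that detour is a sensible one, which is exactly the problem described above; the topology and node names are invented for illustration.

```python
def detour(links, src, dst, avoid_link, visited=None):
    """Depth-first search for any path from src to dst that avoids the
    protected link. A real path computation tool would also weigh cost and
    congestion; this simply takes the first detour it finds."""
    visited = visited or {src}
    for a, b in links:
        for u, v in ((a, b), (b, a)):
            if u != src or {u, v} == set(avoid_link) or v in visited:
                continue
            if v == dst:
                return [src, dst]
            rest = detour(links, v, dst, avoid_link, visited | {v})
            if rest:
                return [src] + rest
    return None

# Made-up topology; the primary LSP runs PE1-P1-P2-PE2 and we protect the P1-P2 link.
links = [("PE1", "P1"), ("P1", "P2"), ("P2", "PE2"), ("P1", "P3"), ("P3", "P2")]
print(detour(links, "P1", "PE2", avoid_link=("P1", "P2")))   # e.g. ['P1', 'P3', 'P2', 'PE2']
```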

There are other additions to MPLS covered by the MPLS-TE extensions, but these are minor compared to the ones described above.

    Practical use of MPLS-TE

One would imagine that, with all the benefits that accrue from using MPLS-TE, such as enhanced service quality, easier new-service deployment and reduced risk, carriers would flock to it. However, this is not necessarily the case because, as with all new technologies, there are alternatives such as:

• Traditional over-provisioning: Traffic engineering management can be a very complicated task if you attempt to analyse all the flows within a large network. One of the traditional ways that many carriers get round this onerous and challenging task is simply to over-provision their networks. If a network is geographically constrained or just plain simple, then throwing bandwidth at it can be seen as a simple and unchallenging solution. Network equipment is so much cheaper than it used to be (and smaller carriers can buy equipment from eBay – cough, cough!). Dark fibre or multi-Gbit/s links can be bought or leased relatively cheaply as well. So why bother putting in the effort to traffic engineer a network properly?
    • The underlying network does the TE: Many carriers still use the underlying network to traffic engineer their networks. Although ATM is not around as much as it used to be, SDH still is.
• Stick with IGP adjustment: Many carriers still stick to the simple IGP metric adjustment discussed earlier as it handles the simple TE activities they require. True, many would moan about how difficult this is to manage, but migrating to an MPLS-TE environment could be seen as a costly exercise, and they currently do not have the time, resources or money to undertake the transition.
    • Let’s wait and see: There are so many considerations and competitive options that the easiest decision to make is to do nothing.

    Round up

IP traffic engineering is a hot subject and brings forth a considerable variety of views and emotions when discussed. Many carriers have stuck with methods that they have outgrown but are hesitant about making the jump, thinking that something better will come along. Many try to avoid the issue completely by simply over-provisioning and taking an ultra-KISS approach.

However, those carriers that are truly pursuing a converged Next Generation architecture, with all services based on IP and legacy services carried in pseudowire tunnels, cannot avoid undertaking real traffic engineering to the degree once undertaken by the old PSTN planning departments. To a great extent, this could be seen as the wild child of IP networking growing up to become a mature and responsible adult. The IP industry has a long way to go though: creating standards is difficult enough, but getting people to use them is something else!

    Whatever else, simply sitting and waiting is not the solution…

• Aria Networks (UK)
• The economics of network control (USA)
• Making networks perform (USA)
• You have one network, we have one plan (USA)
• Wandl wide area design laboratory (USA)

    Addendum #1: The follow-on article to this post is: Path Computation Element (PCE): IETF’s hidden jewel

    Addendum #2: GMPLS and common control

    Addendum #3: Aria Networks shows the optimal path


    Colo crisis for the UK Internet and IT industry?

    March 29, 2007

The UK Internet and IT industries are facing a real crisis that is creeping up on them at a rate of knots (Source: theColocationexchange). In the UK we often believe that we are at the forefront of innovation and the delivery of creative content services such as Web 2.0-based applications and IPTV, but this crisis could force many of these services to be delivered from non-UK infrastructure over the next few years.

So, what are we talking about here? It’s the availability of colocation (colo) services and what you need to pay to use them. Colocation is an area where the UK has excelled and has led Europe for a decade, but this could be set to change over the next twelve months.

It’s no secret to anyone that hosts an Internet service that prices have gone through the roof for small companies in the last twelve months, forcing many of the smaller hosters to just shut up shop. The knock-on effects of this will have a tremendous impact on the UK Internet and IT industries as it also affects large content providers, content distribution companies such as Akamai, telecom companies and core Internet Exchange facilities such as LINX. In other words, pretty much every company involved in delivering Internet services and applications.

    We should be worried.

    Estimated London rack pricing to 2007 (Source: theColocationexchange)

The core problem is that available colocation space is not just in short supply in London, it is simply disappearing at an alarming and accelerating rate, as shown in the chart below (it is even worse in Dublin). It could easily run out for anyone who does not have deep pockets.

    Estimated space availability in London area (Source: theColocationexchange)

    What is causing this crisis?

    Here are some of the reasons.

London’s ever increasing power as a world financial hub: According to the London web site: “London is the banking centre of the world and Europe’s main business centre. More than 100 of Europe’s 500 largest companies have their headquarters in London and a quarter of the world’s largest financial companies have their European headquarters in London too. The London foreign exchange market is the largest in the world, with an average daily turnover of $504 billion, more than New York and Tokyo combined.”

This has been a tremendous success for the UK and has driven a phenomenal expansion in financial companies’ needs for data centre hosting, and they have turned to 3rd party colo providers to meet those needs. In particular, the need for disaster recovery has driven them not only to expand their own in-house capabilities but also to place infrastructure in 3rd party facilities. Colo companies welcomed these prestigious companies with open arms in the face of the telecoms industry meltdown post 2001.

    Sarbanes-Oxley compliance: The necessity for any company that operates in the USA to comply with the onerous Sarbanes-Oxley regulations has had a tremendous impact on the need to manage and audit the capture, storage, access, and sharing of company data. In practice, more analysis and more disk storage are needed leading to more colo space requirements.

No new colo build for the last five years: As in the telecommunications world, life was very difficult for colo operators in the five years following the new millennium. Massive investment in the latter half of the 1990s was followed by pretty much zero expansion of the industry, which remained effectively in stasis. One exception to this is IX Europe, who are expanding their facilities around Heathrow. However, builds such as this will not have any great impact on the market overall, even though they will be highly profitable for the companies expanding.

However, in the last 24 months both the telecoms and the colo industries have been seeing a boom in demand and a return to the buoyant market last seen in the late 1990s (Picture credit: theColocationexchange).

Consolidation: In London particularly, there has been a strong trend towards consolidation and roll-up of colo facilities. A prime example is Telecity (backed by private equity finance from 3i Group, Schroders and Prudential), which has bought Redbus and Globix in the last twenty-four months. These acquisitions included a number of smaller colo operators that focused on supplying smaller customers. The now larger operators have concentrated on winning the lucrative business of corporate data centre outsourcing, which is seen as highly profitable with long contract periods.

Facility absorption: In a similar way to the many telecommunications companies that were sold at very low prices post-2000, the same trend happened in the colo industry. In particular, many of the smaller colos west of London were bought at knock-down valuations by carriers, large third-party systems integrators and financial houses. This effectively took that colo space permanently off the market.

Content services: There has been tremendous infrastructure growth in the last eighteen months by the well-known media and content organisations. This includes all the familiar names such as Google, Yahoo and Microsoft. It also includes companies delivering newer services such as IPTV and content distribution companies such as Akamai. It could be said with justification that this growth is only just beginning, and these companies are competing directly with the financial community, enterprises and carriers for what little colo space is left.

Carrier equipment rooms: Most carriers have their own in-house colo facilities to house their own equipment or to offer colo services to their customers. Few have invested in expanding these facilities in the last few years, so most are now 100% full, forcing carriers to go to 3rd party colos for expansion.

Instant use: When enterprises buy space today they immediately use it rather than letting it lie fallow.

    How has the Colo industry reacted to this ‘Colo 2.0’ spurt of growth?

    With demand going through the roof and with a limited amount of space available in London, it is clearly a seller’s market. Users of colo facilities have seen rack prices increase at an alarming rate. For example: Colocation Price Hikes at Redbus.

However, the rack price graph above does not tell the whole story, as the price of power, which used to be a small additional charge or even thrown in for free, has risen by a factor of three or even four in the last twelve months.

Colos used to focus on selling colo space solely on the basis of rack footprints. However, the new currency is Amperes, not square feet or rack footprints. This is an aspect that is not commonly understood by anyone who has not had to buy space in the last twelve months.

This is because colo facilities are not only capped in the amount of space they have for racks; they are also capped in the amount of electricity a site can take from the local power company. Also, as a significant percentage of this input power ends up as heat in the hosted equipment, colo facilities have needed to make significant investments in cooling to keep the equipment operating within its temperature specifications. They also need to invest in appropriate back-up generators and batteries to power the site in case of an external power failure.

Colo contracts are now principally based on the amount of current the equipment consumes, not its footprint. If the equipment in a rack takes up only a small space but consumes, say, 8 to 10 Amps, then the rest of the rack has to remain empty unless you are willing to pay for an additional full rack’s worth of power.

If a rack owner sub-lets shelf space in a rack to a number of customers, each shelf has to be monitored with its own ammeter.

    One colo explains this to their customers in a rather clumsy way:

    “Price Rises Explained: Why has there been another price change?

    By providing additional power to a specific rack/area of the data floor, we are effectively diverting power away from other areas of the facility, thus making that area unusable. In effect, every 8amps of additional power is an additional rack footprint.

    The price increase reflects the real cost to the business of the additional power. Even with the increase, the cost per additional 8amps of power is still substantially less, almost half the cost of the standard price for a rack foot print including up to 8amp consumption.”

    Another point to bear in mind here is the current that individual servers consume. With the sophisticated power control that is embedded into today’s servers – just like your home PC – there is a tremendous difference in the amount of current a server consumes in its idle state compared to full load. The amount of equipment placed in a rack is limited by the possible full load current consumption even if average consumption is less. In the case of an 8 Amp rack limit, there would also be a hard trip installed by the colo facility that turns the rack off if current reaches say 10 Amps.

If the equipment consists of standard servers or telecom switches this can be accommodated relatively easily, but if a company offers services such as interactive games or IPTV and fills a rack with blade (card) servers, that rack can quite easily consume 15 to 20kW of power, or 60 Amps or more! I’ll leave it to you to undertake the commercial negotiations with your colo, but take a big chequebook!
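To put some very rough numbers on that (assuming a 230 V single-phase supply and ignoring power factor, three-phase feeds and everything else a real facility would worry about, so purely illustrative):

```python
import math

SUPPLY_VOLTS = 230          # assumed UK single-phase supply voltage

def amps(load_watts):
    """Rough current draw for a given load, ignoring power factor."""
    return load_watts / SUPPLY_VOLTS

def rack_footprints_billed(load_watts, amps_per_footprint=8):
    """How many 8 Amp 'rack footprints' of power a load is effectively charged as."""
    return math.ceil(amps(load_watts) / amps_per_footprint)

standard_allowance_watts = 8 * SUPPLY_VOLTS        # the 8 Amp allowance is only ~1.8 kW
blade_rack_watts = 18000                           # a blade-filled rack somewhere in the 15-20 kW range

print(standard_allowance_watts, "W fits inside the standard 8 A allowance")
print(f"{amps(blade_rack_watts):.0f} A")           # roughly 78 A at 230 V
print(rack_footprints_billed(blade_rack_watts))    # billed as roughly 10 footprints of power
```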

    What could be the consequences of the up and coming crisis?

Empty floors like the one seen in the picture are long gone in London colo facilities.

Virtual hosters caught in a trap: Clearly, if a company does not own its own colo facilities but offers colo-based services to its customers, it could prove very difficult and expensive to guarantee access to sufficient space (sorry, Amperage) to meet its customers’ growth needs in an exploding market. As in the semiconductor market, where fabless companies are always the first hit in boom times, companies that rely on 3rd party colos could face significant challenges in the coming months.

The lack of low-cost facilities will hit small hosting companies: The issues raised in this post are significant even for multinational companies, but for small hosting companies they are killers. Many small hosting companies that supply SME customers have already put up the shutters, as it has proved not to be cost effective to pass these additional costs on to their customers.

Small Web 2.0 service companies and start-ups: The reduction in the availability of low-cost colo hosting could have a tremendous impact on small Web 2.0 service development companies, where cash flow is always a problem. Many of these companies used to go to a “sell it cheap” colo, but there are fewer and fewer of those left to resort to. And if small companies do go to these lower-cost colos, they can be placing their services in jeopardy: the colo might have only one carrier fibre connection to the facility, or no power back-up capability, and if that fibre goes down…

It’s not so easy to access lower-cost European facilities: There is space available in some mainland European cities, and at rates considerably lower than those seen in London. However, their possible use does raise some significant issues:

• A connection between the colo centres needs to be paid for. If we are talking about multi-Gbit/s bandwidths, these do not come cheap, and they also need to be backed up by at least a second link for resilience.
• For real-time applications such as games or synchronous backup, the additional transit delay can prove to be a significant issue.
• Companies will need local personnel to support the facility, which can be very expensive and represents a long-term commitment in many European countries.

I called this post ‘Colo crisis for the UK Internet and IT industry?’. I hope that the issues outlined do not have a measurable negative impact in the UK, but I’m really not sure that will be the case. Even if there is a rush to build new facilities in 2007, it will take 18 to 24 months for them to come on line. If this trend continues, a lot of application and content service providers will be forced to deliver their services from Europe or the USA, with a consequent set of knock-on effects for the UK.

    I hope I’m wrong.

Note: I would like to acknowledge a presentation given by Tim Anker of theColocationexchange at the Data Centres Europe conference in March 2007, which provided the inspiration for this post. Tim has been concerned about these issues and has been bringing them to the attention of companies for the last two years.


    Ethernet-over-everything

    March 26, 2007

And you thought Ethernet was simple! It seems I am following a little bit of an Ethernet theme at the moment, so I thought that I would have a go at listing all (many?) of the ways Ethernet packets can be moved from one location to another. Personally I’ve always found this confusing as there seems to be a plethora of acronyms and standards. I will not cover wireless standards in this post.

Like IP (Internet Protocol, not Intellectual Property!), the characteristics of an Ethernet connection are only as good as the bearer service it is being carried over, and thus most of the standards are concerned with that aspect. Of course, IP is most often carried over Ethernet, so the performance characteristics of the Ethernet data path bleed through to IP as well. Aspects such as service resilience and Quality of Service (QoS) are particularly important.

    Here are the ways that I have come across to transport Ethernet.

    Native Ethernet

Native Ethernet in its original definition runs over twisted-pair, coaxial cable or fibre (even though Metcalfe called the cable ‘The Ether’). A core feature called carrier sense multiple access with collision detection (CSMA/CD) enabled multiple computers to share the same transmission medium: a node that detects its frame colliding with one sent by another node at the same time backs off and resends it. This is one of the principal aspects of native Ethernet that is dropped when Ethernet is used on a wide area basis, as it is not needed there.

Virtual LANs (VLANs): An additional capability was defined for Ethernet by the IEEE 802.1Q standard to enable multiple Ethernet segments in an enterprise to be bridged or interconnected over the same physical cable or fibre while keeping each virtual LAN private. VLANs are focused on a single administrative domain where all equipment configurations are planned and managed by a single entity. What is known as Q-in-Q (VLAN stacking) emerged as the de facto technique for preserving customer VLAN settings and providing transparency across a provider network.

IEEE 802.1ad (Provider Bridges) is an amendment to the IEEE 802.1Q-1998 standard that added the definition of Ethernet frames carrying multiple VLAN tags.
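For anyone who likes to see this on the wire, here is a small sketch using the scapy packet library, which has an 802.1Q layer. The MAC addresses and VLAN IDs are made up, and a real provider bridge would mark the outer tag with the 802.1ad EtherType rather than scapy’s default 802.1Q one, but the double-tag structure is the point:

```python
from scapy.all import Ether, Dot1Q, IP   # pip install scapy

# A customer frame carrying its own 802.1Q tag (VLAN 100)...
customer_frame = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
                  / Dot1Q(vlan=100)
                  / IP(dst="192.0.2.1"))

# ...which the provider bridge wraps in an outer service tag (VLAN 500) while
# leaving the customer tag untouched: the essence of Q-in-Q / 802.1ad.
# (A real provider bridge would use the 802.1ad EtherType 0x88a8 for the
# outer tag; scapy's Dot1Q defaults to the plain 802.1Q EtherType.)
provider_frame = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
                  / Dot1Q(vlan=500)
                  / Dot1Q(vlan=100)
                  / IP(dst="192.0.2.1"))

provider_frame.show()   # prints the stacked tag structure
```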

Ethernet in the First Mile (EFM): In June 2004, the IEEE approved a formal specification developed by its IEEE 802.3ah task force. EFM focuses on standardising a number of aspects that help Ethernet from a network access perspective. In particular it aims to provide a single global standard enabling complete interoperability of services. The standards activity encompasses EFM over fibre, EFM over copper, EFM over passive optical network, and Ethernet First Mile Operation, Administration, and Maintenance. Combined with whatever technology a carrier deploys to carry Ethernet over its core network, EFM enables full end-to-end wide area Ethernet services to be offered.

    Over Dense Wave Division Multiplex (DWDM) optical networks

10GbE: The 10Gbit/s Ethernet standard (IEEE 802.3ae, published in 2002, with the twisted-pair 10GBASE-T variant following in 2006) operates in full duplex only, dropping CSMA/CD altogether. 10GbE can be delivered over a carrier’s DWDM optical network.

    Over SONET / SDH

Ethernet over SONET / SDH (EoS): For those carriers that have deployed SONET / SDH networks to support their traditional voice and TDM data services, EoS is a natural service to offer, following a keep-it-simple approach, as it does not involve the tunnelling that would be needed if IP/MPLS were used as the transmission medium. Ethernet frames are encapsulated into SDH Virtual Containers. This technology is often preferred by customers as it does not involve transporting Ethernet encapsulated over a shared IP or MPLS network, which enterprises often perceive as a performance or security risk (I have always seen this as an illogical concern, as ALL public networks are shared at some level).

Link Access Procedure – SDH (LAPS): LAPS, a variant of the original LAP protocol, is an encapsulation scheme for Ethernet over SONET/SDH. It provides a point-to-point connectionless service over SONET/SDH and enables the encapsulation of IP and Ethernet data.

Over IP and MPLS

Layer 2 Tunnelling Protocol (L2TP): L2TP was originally standardised in 1999, and an updated version, L2TPv3, was published in 2005. L2TP is a tunnelling protocol that enables Layer 2 data-link protocols to be carried over IP networks alongside PPP; L2TPv3 extends this to Ethernet, frame relay and ATM. L2TPv3 is essentially a point-to-point tunnelling protocol that is used to interconnect single-domain enterprise sites.

L2TPv3 provides what is also known as a Virtual Private Wire Service (VPWS) and is aimed at native IP networks. As it is a pseudowire technology, it is often grouped with Any Transport over MPLS (AToM).

Layer 2 MPLS VPN (L2VPN): Customers’ networks are separated from each other on a shared MPLS network using the MPLS Label Distribution Protocol (LDP) to set up point-to-point pseudowire Ethernet links. The picture below shows individual customer sites that are relatively near to each other connected by L2TPv3 or L2VPN tunnelling technology based on MPLS Label Switched Paths.

Virtual Private LAN Service (VPLS): A VPLS is a method of providing a fully meshed, multipoint, wide area Ethernet service using pseudowire tunnelling technology. A VPLS is a Virtual Private Network (VPN) that enables all of a customer’s LANs connected to it to communicate with each other. A newer carrier that has invested in an MPLS network rather than an SDH / SONET core network would use VPLS to offer Ethernet VPNs to its customers. The picture below shows a VPLS with an LSP link containing multiple MPLS pseudowire tunnels.
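The ‘fully meshed’ part is worth dwelling on: every PE router participating in a VPLS instance needs a pseudowire to every other PE, so the mesh grows quadratically with the number of sites. A trivial sketch (the PE names are invented):

```python
from itertools import combinations

def vpls_mesh(pe_routers):
    """The full mesh of pseudowires needed between PEs for one VPLS instance."""
    return list(combinations(sorted(pe_routers), 2))

pes = ["PE-London", "PE-Manchester", "PE-Leeds", "PE-Glasgow"]
mesh = vpls_mesh(pes)
print(len(mesh), "pseudowires:", mesh)   # n(n-1)/2 = 6 pseudowires for four PEs
```

Four PEs need six pseudowires; forty PEs would need 780, which is one reason hierarchical VPLS exists.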

MEF: The Metro Ethernet Forum (MEF) defines several Ethernet service types:

    Ethernet Private Line (EPL). An EPL service supports a single Ethernet VC (EVC) between two customer sites.

Ethernet Virtual Private Line (EVPL). An EVPL service supports multiple EVCs between two customer sites.

Virtual Private LAN Service (VPLS) or Ethernet LAN (E-LAN). An E-LAN service supports multipoint EVCs connecting multiple customer sites.

These MEF-created service definitions, which are not standards as such (indeed they are independent of the underlying standards), enable equipment vendors and service providers to achieve third-party certification for their products.
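If it helps to keep the three service types straight, here is the same information as a little table in code form, summarising the definitions above rather than quoting any MEF wording:

```python
# A summary of the MEF service types listed above, not official MEF definitions.
MEF_SERVICES = {
    "EPL":   {"evcs_per_interface": 1,      "sites_per_evc": 2},       # a single point-to-point EVC
    "EVPL":  {"evcs_per_interface": "many", "sites_per_evc": 2},       # multiplexed point-to-point EVCs
    "E-LAN": {"evcs_per_interface": "many", "sites_per_evc": "many"},  # multipoint, LAN-like
}

for name, props in MEF_SERVICES.items():
    print(name, props)
```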

    Looking forward:

100GbE: In 2006, the IEEE’s Higher Speed Study Group (HSSG), tasked with exploring what Ethernet’s next speed might be, voted to pursue 100G Ethernet over other options such as 40Gbit/s Ethernet, to be delivered in the 2009/10 time frame. The IEEE will work to standardise 100G Ethernet over distances as far as 6 miles (about 10 km) over single-mode fibre and 328 feet (100 m) over multimode fibre.

PBT or PBB-TE: PBT is a group of enhancements to Ethernet that are being defined by the IEEE’s Provider Backbone Bridging Traffic Engineering (PBB-TE) group. I’ve covered this in Ethernet goes carrier grade with PBT / PBB-TE?

T-MPLS: T-MPLS is a recent derivative of MPLS – I have covered this in PBB-TE / PBT or will it be T-MPLS?

Well, I hope I’ve covered most of the Ethernet wide area transmission standards activities here. If I haven’t, I’ll add others as addendums. At least they are all on one page!


    Islands of communication or isolation?

    March 23, 2007

    One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

    For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is often driven as much by commercial businesses and industry consortia as by the traditional bodies, which often leads to competing solutions and standards slugging it out in the marketplace (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But, as mere individuals in this world of giants, these issues can cause us users real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways that we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world who owns a phone, as long as you know their number. In contrast, many, if not most, of the new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What causes these so-called islands of isolation? Here are a few examples.

    Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. The Lotus Notes and Outlook email systems were but one example. This is often still the case today when vendors bolt advanced features onto the basic product that are not available to anyone not using that software – presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that the employees of that company have full access to their own services, but these are not available to anyone outside the firewall for use on an inter-company basis. Combine this with the deployment of the vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only available to their own customer base. Free proprietary PC-to-PC calls come to mind and, more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access, offering instead what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

    Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

    Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension could not work with someone who was not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: User groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

    Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development: In the absence of standards, companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. I bet you can name one or two of those!

One takeaway from all this is that in the real world you can’t avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, and these islands will probably remain with us for the indefinite future in the competitive world we live in.

    Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues for a subset of the islands, but overall this remains a major conundrum that limits our ability to communicate at any time, any place and anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing in order to succeed, run a mile – no, ten miles! On the other hand, it could be seen as the land of opportunity?