The Cloud hotspotting the planet

July 25, 2007

I first came across the embryonic idea behind The Cloud in 2001 when I first met its Founder, George Polk. In those days George was the ‘Entrepreneur in Residence’ at iGabriel, an early-stage VC formed in the same year.

One of his first questions was “how can I make money from a Wi-Fi hotspot business?” I certainly didn’t claim to know at the time but, sure as eggs is eggs, I would guess that George, his co-founder Niall Murphy and The Cloud team are world experts by now! George often talked about environmental issues, but I was sorry to hear that he had stepped down from his CEO position (he’s still on the Board) to work on climate change issues.

The vision and business model behind The Cloud is based on the not unreasonable idea that we all now live in a connected world where we use multiple devices to access the Internet. We all know what these are: PCs, notebooks, mobile phones, PDAs and games consoles etc. etc. Moreover, we want to transparently use any transport bearer that is to hand to access the Internet, no matter where we are or what we are doing. This could be DSL in the home, a LAN in the office, GPRS on a mobile phone or a Wi-Fi hotspot.

The Cloud focuses on the creation and enablement of public Wi-Fi so that consumers and business people are able to connect to the Internet wherever they may be when out and about.

One of the big issues with Wi-Fi hotspots back in the early years of the decade (and it still is an issue, though less so these days) was that the Wi-Fi hotspot provision industry was highly fragmented, with virtually every public hotspot being managed by a different provider. When these providers wanted to monetise their activities, it seemed that you needed to set up a different account at each site you visited. This cast a big shadow over users and slowed market growth considerably.

What was needed in the marketplace were Wi-Fi aggregators, or market consolidation, that would allow a roaming user to seamlessly access the Internet from lots of different hotspots without having to maintain multiple accounts.

Meeting this need for always-on connectivity is where The Cloud is focused, and their aim is to enable wide-scale availability of public Wi-Fi access through four principal methods:

  1. Direct deployment of hotspots: (a) in coffee shops, airports, public houses etc., in partnership with the owners of these assets; (b) in wide-area locations such as city centres, in partnership with local councils.
  2. Wi-Fi extensions of existing public fixed IP networks.
  3. Wi-Fi extension of existing private enterprise networks – “co-opting networks”
  4. Roaming relationships with other Wi-Fi operators and service providers, such as with iPass in 2006.

The Cloud’s vision is to stitch together all these assets and create a cohesive and ubiquitous Wi-Fi network to enable Internet access at any location using the most appropriate bearer available.

It’s The Cloud’s activities in 1(a) above that are getting much publicity at the moment, as back in April the company announced coverage of the City of London in partnership with the City of London Corporation. The map below shows the extent of the network.

Note: However, The Cloud will not have everything all to itself in London as a ‘free’ WiFi Thames based network has just been launched (July 2007) by Meshhopper.

On July 18th 2007 The Cloud announced coverage of Manchester city centre as per the map below:

These network roll-outs are very ambitious and are some of the largest deployments of wide-area Wi-Fi technology in the world, so I was intrigued as to how this was achieved and what challenges were encountered during the roll-out.

Last week I talked with Niall Murphy, The Cloud’s Co-Founder and Chief Strategy Officer, to catch up with what they were up to and to find out what he could tell me about the architecture of these big Wi-Fi networks.

One of my first questions in respect of the city-centre networks was about in-building coverage as even high power GSM telephony has issues with this and Wi-Fi nodes are limited to a maximum power of 100mW.

I think I already knew the answer to this, but I wanted to see what The Cloud’s policy was. As I expected, Niall explained that “this is a challenge” and consideration of this need was not part of the objective of the deployments which are focused on providing coverage in “open public spaces“. This has to be right in my opinion as the limitation in power would make this an unachievable objective in practice.

Interestingly, Niall talked about The Cloud’s involvement in OFCOM‘s investigation to evaluate whether there would be any additional commercial benefit in allowing transmit powers greater than 100mW. However, The Cloud’s recommendation was not to increase power for two reasons:

  1. Higher power would create a higher level of interference over a wider area which would negate the benefits of additional power.
  2. Higher power would negatively impact battery life in devices.

In the end, if I remember correctly, the recommendation by OFCOM was to leave the power limits as they were.

I was interested in the architecture of the city-wide networks as I really did not know how they had gone about the challenge. I am pretty familiar with the concept of mesh networks as I tracked the path of one of the early UK pioneers of this technology, Radiant Networks. Unfortunately, Radiant went to the wall in 2004, for reasons I assume to be concerned with the use of highly complex, proprietary and expensive nodes (as shown on the left) and the use of the 26, 28 and 40GHz bands, which would severely impact economics due to small cell sizes.

Fortunately, Wi-Fi is nothing like those early proprietary approaches to mesh networks and the technology has come of age due to wide-scale global deployment. More importantly, this has also led to considerably lower equipment costs. The reason for this is that Wi-Fi uses the 2.4GHz ‘free band’, and most countries around the world have standardised on the use of this band, giving Wi-Fi equipment manufacturers access to a truly global market.

Anyway, getting back to The Cloud, Niall said that “the aim behind the City of London network was to provide ubiquitous coverage in public spaces to a level of 95%, which we have achieved in practice”.

The network uses 127 nodes, which are located on street lights, video surveillance poles or other street furniture owned by their partner, the City of London Corporation. Are 127 nodes enough, I asked? Niall’s answer was an emphatic “yes”, although “the 150 metre cell radius and 100mW power limitation of Wi-Fi definitely provides a significant challenge”.

Interestingly, Niall observed that deploying a network in the UK was much harder than in the US due to the lower permitted power levels in the 2.4GHz band. The Cloud’s experience has shown that a cell density two or three times greater is required in a UK city – comparing London to Philadelphia, for example. This raises a lot of interesting questions about hotspot economics!
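
As a rough illustration of why permitted power translates so directly into node count, here is a back-of-the-envelope sketch. The coverage area and the effective cell radii are illustrative assumptions, not figures from The Cloud; the point is simply that the number of cells needed grows with the square of any reduction in usable radius.

```python
import math

def nodes_needed(area_km2: float, effective_radius_m: float) -> int:
    """Rough estimate of the access points needed to blanket an area,
    treating each node as an ideal circular cell (no overlap, no buildings)."""
    cell_area_km2 = math.pi * (effective_radius_m / 1000.0) ** 2
    return math.ceil(area_km2 / cell_area_km2)

area = 2.9  # km^2 -- an assumed coverage area, not The Cloud's own figure
for radius_m in (150, 100, 85):
    print(f"{radius_m} m effective radius -> ~{nodes_needed(area, radius_m)} nodes")
```

Shrinking the usable radius from 150m to somewhere around 85–100m, for example, is enough to push the node count up by the two-to-three times Niall mentions.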

Much time was spent on hotspot planning, which was done in partnership with a Canadian company called Belair Networks. One of the interesting aspects of this activity was that there was “serious head scratching” at Belair as, being a Canadian company, they were used to nice neat square grids of streets and not the no-straight-line topological mess of London!

Data traffic from the 127 nodes that form The Cloud’s City of London network is back-hauled to seven 100Mbit/s fibre PoPs (Points of Presence) using 5.6GHz radio. Thus each node has two transceivers. The first is the Wi-Fi transceiver, with a 2.4GHz antenna trained on the appropriate territory. The second is a 5.6GHz transceiver pointing to the next node, where the traffic daisy-chains back to the fibre PoP, effectively creating a true mesh network (incidentally, backhaul is one of the main uses of WiMax technology). I won’t talk about the strengths and weaknesses of mesh radio networks here but will write a post on this subject at a future date.
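
To make the daisy-chain idea concrete, here is a toy model of the backhaul path; the node names and chain layout are invented purely for illustration and bear no relation to the real City of London topology.

```python
# Toy model of the daisy-chain backhaul described above: each street node
# relays its 2.4GHz access traffic over a 5.6GHz link towards a fibre PoP.
backhaul_next_hop = {
    "node_a": "node_b",   # node_a relays via node_b
    "node_b": "node_c",
    "node_c": "pop_1",    # node_c has a direct link to the fibre PoP
    "node_d": "pop_1",
}

def hops_to_pop(node: str) -> int:
    """Count 5.6GHz hops from a node back to its fibre point of presence."""
    hops = 0
    while not node.startswith("pop"):
        node = backhaul_next_hop[node]
        hops += 1
    return hops

for n in ("node_a", "node_d"):
    print(n, "reaches the PoP in", hops_to_pop(n), "backhaul hop(s)")
```

One practical consequence of this design is that traffic from a node deep in a chain consumes capacity on every 5.6GHz hop it crosses, which is presumably one reason the network is fed from seven separate fibre PoPs rather than one.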

According to Niall, the tricky part of the build was finding appropriate sites for the nodes. You might think this was purely down to radio propagation issues, but there was also the problem that the physical assets they were using didn’t always turn out to be where they appeared to be on the maps! “We ended up arriving at the street lamp indicated on the map and it was not there!” This is much like the many carriers who do not know where some of their switches are located, or how many customer leased lines they have in place.

Another interesting anecdote concerned the expectations of journalists at the launch of the network. “Because we were talking about ubiquitous coverage, many thought they could jump in a cab and watch Joost streaming video as they weaved their way around the city.” Oh, it didn’t work then, I said to Niall, expecting him to say that they were disappointed. “No,” he said, “it absolutely worked!”

Niall says the network is up and running and working according to their expectations: “There is still a lot of tuning and optimisation to do but we are comfortable with the performance.”

Incidentally, The Cloud owns the network and works with the Corporation of London as the landlord.

Round up

The Cloud has achieved a lot this year with the roll-out of the city-centre networks and the sign-up of six to seven thousand users in London alone. This was backed up by the launch of UltraWiFi, a flat-rate service costing £11.99 per month.

Incidentally, The Cloud do not see themselves as being in competition with cable companies or mobile operators, concentrating as they do on providing pure Wi-Fi access to individuals on the move – although in many ways their service actually does compete.

They operate in the UK, Sweden, Denmark, Norway, Germany and The Netherlands. They’re also working with a wide array of service providers, including O2, Vodafone, Telenor, BT, iPass, Vonage and Nintendo, amongst others.

The big challenge ahead, as I’m sure they would acknowledge, is how they are going to ramp up revenues and take their business into the big time. I am confident that they are well able to meet this challenge. All I know is that public Wi-Fi access is a crucial capability in this connected world, and without it the Internet would be a much less exciting and usable place.


IPv6 to the rescue – eh?

June 21, 2007

To me, IPv6 is one of the Internet’s real enigmas as the supposed replacement for the Internet’s ubiquitous IPv4. We all know this has not happened.

The Internet Protocol (IPv4) is the principal protocol that lies behind the Internet, and it originated before the Internet itself. In the late 1960s there was a need among a number of US universities to exchange data, and an interest in developing the new network technologies, switching capabilities and protocols required to achieve this.

The result of this was the formation of the Advanced Research Projects Agency (ARPA), a US government body, which started developing a private network called ARPANET; ARPA itself later metamorphosed into the Defense Advanced Research Projects Agency (DARPA). The initial contract to develop the network was won by Bolt, Beranek and Newman (BBN), which was eventually bought by Verizon and sold on to two private equity companies in 2004 to be renamed BBN Technologies.

The early services required by the university consortium were file transfer, email and the ability to remotely log onto university computers. The first version of the protocol was called the Network Control Protocol (NCP) and saw the light of day in 1971.

In 1973, Vint Cerf, who had worked on NCP (and is now Chief Internet Evangelist at Google), and Robert Kahn (who had previously worked on the Interface Message Processor [IMP]) kicked off a program to design a next-generation networking protocol for the ARPANET. This activity resulted in the standardisation, through ARPANET Requests For Comments (RFCs), of TCP/IPv4 in 1981 (now IETF RFC 791).

IPv4 uses a 32-bit address structure which we see most commonly written in dot-decimal notation such as aaa.bbb.ccc.ddd representing a total of 4,294,967,296 unique addresses. Not all of these are available for public use as many addresses are reserved.
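
For the curious, Python’s standard ipaddress module makes the arithmetic easy to check; the address below is just an arbitrary example.

```python
import ipaddress

addr = ipaddress.IPv4Address("192.0.2.1")      # an arbitrary example address
print(int(addr))                               # 3221225985 -- the same address as one 32-bit integer
print(2 ** 32)                                 # 4294967296 possible IPv4 addresses
print(ipaddress.IPv4Network("10.0.0.0/8").num_addresses)  # 16777216 -- one of the reserved private blocks
```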

An excellent book that pragmatically and engagingly goes through the origins of the Internet in much detail is Where Wizards Stay Up Late – it’s well worth a read.

The perceived need for upgrading

The whole aim of the development of IPv4 was to provide a schema to enable global computing by ensuring that computers could uniquely identify themselves through a common addressing scheme and communicate in a standardised way.

No matter how you look at it, IPv4 must be one of the most successful standardisation efforts to have ever taken place if measured by its success and ubiquity today. Just how many servers, routers, switches, computers, phones, and fridges are there that contain an IPv4 protocol stack? I’m not too sure, but it’s certainly a big, big number!

In the early 1990s, as the Internet really started ‘taking off’ outside of university networks, it was generally thought that the IPv4 specification was beginning to run out of steam and would not be able to cope with the scale of the Internet as the visionaries foresaw. Although there were a number of deficiencies, the prime mover for a replacement to IPv4 came from the view that the address space of 32 bits was too restrictive and would completely run out within a few years. This was foreseen because it was envisioned, probably not wrongly, that nearly every future electronic device would need its own unique IP address and if this came to fruition the addressing space of IPv4 would be woefully inadequate.

Thus the IPv6 standardisation project was born. IPv6 packaged together a number of IPv4 enhancements that would enable the IP protocol to be serviceable for the 21st century.

Work started in 1992/93 and by 1996 a number of RFCs had been released; the core IPv6 specification is now RFC 2460. One of the most important RFCs to be released was RFC 1933, which specifically looked at the transition mechanisms for converting IPv4 networks to IPv6. This covered the ability of routers to run IPv4 and IPv6 stacks concurrently – “dual stack” – and the pragmatic ability to tunnel the IPv6 protocol over ‘legacy’ IPv4-based networks such as the Internet.

To quote RFC 1933:

This document specifies IPv4 compatibility mechanisms that can be implemented by IPv6 hosts and routers. These mechanisms include providing complete implementations of both versions of the Internet Protocol (IPv4 and IPv6), and tunnelling IPv6 packets over IPv4 routing infrastructures. They are designed to allow IPv6 nodes to maintain complete compatibility with IPv4, which should greatly simplify the deployment of IPv6 in the Internet, and facilitate the eventual transition of the entire Internet to IPv6.

The IPv6 specification contained a number of areas of enhancement:

Address space: Back in the early 1990s there was a great deal of concern about the lack of availability of public IP addresses. With the widespread uptake of IP rather than ATM as the basis of enterprise private networks, as discussed in a previous post, The demise of ATM, most enterprises had gone ahead and implemented their networks with any old IP addresses they cared to use. This didn’t matter at the time because those networks were not connected to the public Internet, so it made no difference whether other computers or routers had selected the same addresses.

It first became a serious problem when two divisions of a company tried to interconnect their private networks and found that both had selected the same default IP addresses and could not connect. This was further compounded when those companies wanted to connect to the Internet and found that their privately selected IP addresses could not be used in the public space, as they had already been allocated to other companies.

The answer to this problem was to increase the IP protocol’s addressing space to accommodate all the private networks coming onto the public network. Combined with the vision that every electronic device could contain an IP stack, this led IPv6 to increase the address space to 128 bits from IPv4’s 32 bits.
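
The jump from 32 to 128 bits is hard to picture, so here is a quick comparison, again using Python’s ipaddress module; the 2001:db8:: address is from the prefix reserved for documentation examples.

```python
import ipaddress

print(2 ** 32)    # 4294967296 IPv4 addresses (roughly 4.3 billion)
print(2 ** 128)   # 340282366920938463463374607431768211456 (roughly 3.4 x 10^38)

# IPv6 addresses are written as eight 16-bit groups in hexadecimal,
# with a run of zero groups compressible to '::'.
addr = ipaddress.IPv6Address("2001:db8::1")    # from the prefix reserved for documentation
print(addr.exploded)                           # 2001:0db8:0000:0000:0000:0000:0000:0001
```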

Headers: Headers in IPv4 (headers precede the data in a packet and carry routing and other information about it) were already becoming unwieldy, and carrying IPv6’s much longer addresses in the same style would only have made the minimum 20-byte header considerably bigger. IPv6 headers are therefore simplified, with optional information moved into extension headers that can be chained together and are only used when needed. Excluding the address fields, IPv4 has a total of 10 header fields, while IPv6 has only 6 and no options.

Configuration: Managing an IP network is pretty much a manual exercise, with few tools to automate the activity beyond the likes of DHCP (the automatic allocation of IP addresses to computers). Network administrators seem to spend most of the day manually entering IP addresses into fields in network management interfaces, which really does not make much use of their skills.

IPv6 incorporates enhancements to enable a ‘fully automatic’ mode where the protocol can assign an address to itself without human intervention. The IPv6 protocol will send out a request to enquire whether any other device has the same address. If it receives a positive reply it will add a random offset and ask again until it receives no reply. IPv6 can also identify nearby routers and automatically detect whether a local DHCP server is available.
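
The sketch below shows one flavour of this self-assignment, deriving a link-local address from a device’s MAC address (the modified EUI-64 method); it is a minimal illustration only and omits the duplicate-address check and router discovery steps described above.

```python
def eui64_link_local(mac: str) -> str:
    """Derive an IPv6 link-local address from a MAC address using the
    modified EUI-64 method -- one way a device can pick an address for
    itself. The duplicate-address check described above is not shown."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FFFE in the middle
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(eui64_link_local("00:1a:2b:3c:4d:5e"))        # fe80::021a:2bff:fe3c:4d5e
```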

Quality of Service: IPv6 has embedded enhancements to enable the prioritisation of certain classes of traffic by assigning a value to each packet in the Traffic Class (priority) field.

Security: IPv6 incorporates IPsec to provide authentication and encryption and so improve the security of packet transmission; encryption is handled by the Encapsulating Security Payload (ESP) header.

Multicast: Multicast addresses are group addresses, so that packets can be sent to a group rather than an individual. IPv4 handles this very inefficiently, while IPv6 builds the concept of a multicast address into its core.

So why aren’t we all using IPv6?

The short answer to this question is that IPv4 is a victim of its own success. The task of migrating the Internet to IPv6, even taking into account the available migration options of dual-stack hosting and tunnelling, is just too challenging.

As we all know, the Internet is made up of thousands of independently managed networks, each looking to commercially thrive or often just to survive. There is no body overseeing how the Internet is run, except for specific technical aspects such as Domain Name System (DNS) management or the standards body, the IETF. (Picture credit: the logo of the Linux IPv6 Development Project)

No matter how much individual evangelists push for the upgrade, getting the world to do so is pretty much an impossible task unless everyone sees that there is a distinct commercial and technical benefit for them to do so.

This is the core issue, and the benefits of upgrading to IPv6 have been seriously eroded by the advent of other standards efforts that address each of the IPv6 enhancements on a stand-alone basis. The two principal ones are NAT and MPLS.

Network address translation (NAT): To overcome the limitation in the number of available public addresses, NAT was implemented. This means that many users or computers in a private network are able to access the public Internet using a single public IP address. Each user is assigned a transient, dynamic session address when they access the Internet, and the NAT software manages the translation between the public IP address and the dynamic address used within the private network.

NAT effectively addressed the concern that the Internet might run out of address space, though it could be argued that NAT is just a short-term solution that came at a big cost to users. The principal downside is that external parties are unable to set up long-term relationships with an individual user or computer behind a NAT wall, as that user or computer has not been assigned its own unique public IP address, and the internal, dynamically assigned addresses can change at any time.

This particularly affects applications that embed addresses so that traffic can always be sent to a specific individual or computer – VoIP is probably the main victim.
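
A toy sketch of the mechanism, and of why unsolicited inbound traffic is the casualty, might look like this; the addresses and port numbers are of course invented.

```python
import itertools

# Toy port-based NAT: many private hosts share one public address by being
# given a temporary public port for each outbound session.
PUBLIC_IP = "203.0.113.7"          # the single public address (example value)
_next_port = itertools.count(40000)
nat_table = {}                     # (private_ip, private_port) -> public_port

def translate_outbound(private_ip: str, private_port: int):
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(_next_port)   # allocate a transient public port
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port: int):
    """Only works for sessions the inside host initiated -- unsolicited
    inbound traffic (a VoIP call, say) has no mapping to follow."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None

print(translate_outbound("192.168.1.10", 5060))   # ('203.0.113.7', 40000)
print(translate_inbound(40000))                   # ('192.168.1.10', 5060)
print(translate_inbound(40001))                   # None -- nowhere to send it
```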

It’s interesting to note that the capability to uniquely identify individual computers was the main principle behind the development of IPv4, so it is quite easy to see why strong views are often expressed about NAT!

MPLS and related QoS standards: The advent of MPLS, covered in The rise and maturity of MPLS and MPLS and the limitations of the Internet, addressed much of the IP community’s need to tackle Quality of Service by separating high-priority service traffic from low-priority traffic.

Round up

Don’t break what works. IP networks take a considerable amount of skill and hard work to keep alive. They always seem to be ‘living on the edge’, breaking down the moment a network administrator gets distracted. ‘Leave well alone’ is the mantra of many operational groups.

The benefits of upgrading to IPv6 have been considerably eroded by the advent of NAT and MPLS. Combine this with the lack of an overall management body that could force through a universal upgrade, plus the innate inertia of carriers and ISPs, and IPv6 will probably never achieve the dominant position of its progenitor, IPv4.

According to one overview of IPv6, which gets to the heart of the subject, “Although IPv6 is taking its sweet time to conquer the world, it’s now showing up in more and more places, so you may actually run into it one of these days.”

This is not to say that IPv6 is dead; rather, it is being marginalised by only being run in closed networks (albeit some rather large ones). There would be real benefit in the Internet being upgraded to IPv6, as every individual and every device connected to it could be assigned its own unique address, as envisioned by the Founders of the Internet. The inability to do this severely constrains services and applications, which cannot clearly identify an individual on an on-going basis in the way that is inherent in a telephone number. This clearly reflects badly on the Internet.

IPv6 is a victim of the success of the Internet and the ubiquity of IPv4, and will probably never replace IPv4 in the Internet in the foreseeable future (maybe I should never say never!). I was once asked by a Cisco Fellow how IPv6 could be rolled out; after shrugging my shoulders and laughing, I suggested that it needed a Bill Gates of the Internet to force through the change. That suggestion did not go down too well. Funnily enough, now that IPv6 is incorporated into Vista, we could yet see the day when this happens. The only fly in the ointment is that Vista has the same problems and challenges as IPv6 in replacing XP – users are finally tiring of never-ending upgrades with little practical benefit.

Interesting times.


sip, Sip, SIP – Gulp!

May 22, 2007

Session Initiation Protocol, or ‘SIP’ as it is known, has become a major signalling protocol in the IP world as it lies at the heart of Voice-over-IP (VoIP). It’s a term you can hardly miss as it is supported by every vendor of phones on the planet (Picture credit: Avaya: an Avaya SIP phone).

Many open software groups have taken SIP to the heart of their initiatives and an example of this is IP Multimedia Subsystem (IMS) which I recently touched upon in IP Multimedia Subsystem or bust!

SIP is a real-time IP application-layer protocol that sits alongside HTTP, FTP, RTP and other well-known protocols used to move data through the Internet. However, it is an extremely important one because it enables SIP devices to discover, negotiate, connect and establish communication sessions with other SIP-enabled devices.

SIP was co-authored in 1996 by Jonathan Rosenberg, who is now a Cisco Fellow, Henning Schulzrinne, Professor and Chair of the Department of Computer Science at Columbia University, and Mark Handley, Professor of Networked Systems at UCL. SIP development moved into an IETF SIP Working Group, which still maintains the RFC 3261 standard. SIP was originally used on the US experimental multicast network commonly known as the Mbone. This makes SIP an IT/IP standard rather than one developed by the communications industry.

Prior to SIP, voice signalling protocols such as SS7 (C7 in the UK) were essentially proprietary, aimed at use by the big telecommunications companies on their large Public Switched Telephone Network (PSTN) voice networks. With the advent of the Internet and the ‘invention’ of Voice over IP, it soon became clear that a new signalling protocol was required – one that was peer-to-peer, scalable, open, extensible, lightweight and simple in operation, and that could be used on a whole new generation of real-time communications devices and services running over the Internet.

SIP itself is based on earlier IETF / Internet standards, principally the Hypertext Transfer Protocol (HTTP), which is the core protocol behind the World Wide Web.

Key features of SIP

The SIP signalling standard has many key features:

Communications device identification: SIP supports a concept known as the Address of Record (AOR), which represents a user’s unique address in the world of SIP communications. An example of an AOR is sip:xxx@yyy.com. To enable a user to have multiple communications devices or services, SIP uses the Uniform Resource Identifier (URI). A URI is like the Uniform Resource Locator (URL) used to identify servers on the World Wide Web. URIs can be used to specify the destination device of a real-time session, e.g.

  • IM: sip: xxx@yyy.com (Windows Messenger uses SIP)
  • Phone: sip: 1234 1234 1234@yyy.com; user=phone
  • FAX: sip: 1234 1234 1235@yyy.com; user=fax

A SIP URI can use both traditional PSTN numbering schemes AND alphabetic schemes as used on the Internet.
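
As an illustration of how simple the addressing convention is to work with, here is a deliberately minimal parser for the kinds of URI shown above; a real implementation would follow the full RFC 3261 grammar, and the example values are invented.

```python
def parse_sip_uri(uri: str) -> dict:
    """Very small illustrative parser: handles 'sip:user@host;param=value'
    forms only -- real SIP URI handling covers far more."""
    assert uri.startswith("sip:")
    rest = uri[len("sip:"):]
    rest, _, params = rest.partition(";")
    user, _, host = rest.partition("@")
    return {
        "user": user,
        "host": host,
        "params": dict(p.split("=", 1) for p in params.split(";") if p),
    }

print(parse_sip_uri("sip:xxx@yyy.com"))
print(parse_sip_uri("sip:12341234123@yyy.com;user=phone"))
```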

Focussed function: SIP only manages the set-up and tear-down of real-time communication sessions; it does not manage the actual transport of media data. Other protocols undertake this task.

Presence support: SIP is used in a variety of applications but has found a strong home in applications such as VoIP and Instant Messaging (IM). What makes SIP interesting is that it is not only capable of setting up and tearing down real-time communications sessions but also supports and tracks a user’s availability through its presence capability. (The open presence standard Jabber, by contrast, uses its own protocol, XMPP, rather than SIP.) I wrote about presence in – The magic of ‘presence’.

Presence is supported through a key SIP extension: SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) [a really contrived acronym!]. This allows a user to state their status, as seen in most of the common IM systems. AOL Instant Messenger is shown in the picture on the left.

SIMPLE means that the concept of Presence can be used transparently on other communications devices such as mobile phones, SIP phones, email clients and PBX systems.

User preference: SIP user preference functionality enables a user to control how a call is handled in accordance with their preferences. For example:

  • Time of day: A user can take all calls during office hours but direct them to a voice mail box in the evenings.
  • Buddy lists: Give priority to certain individuals according to a status associated with each contact in an address book.
  • Multi-device management: Determine which device / service is used to respond to a call from particular individuals.

PSTN mapping: SIP can manage the translation or mapping of conventional PSTN numbers to SIP URIs and vice versa. This capability allows SIP sessions to inter-work transparently with the PSTN. There are initiatives, such as ENUM, that provide the appropriate database capabilities. To quote ENUM’s home page:

“ENUM unifies traditional telephony and next-generation IP networks, and provides a critical framework for mapping and processing diverse network addresses. It transforms the telephone number—the most basic and commonly-used communications address—into a universal identifier that can be used across many different devices and applications (voice, fax, mobile, email, text messaging, location-based services and the Internet).”

SIP trunking: SIP trunks enable enterprises to carry inter-site calls over a pure IP network. This could use an IP-VPN over an MPLS-based network with a guaranteed Quality of Service. Using SIP trunks could lead to significant cost savings when compared to using traditional E1 or T1 leased lines.

Inter-island communications: In a recent post, Islands of communication or isolation? I wrote about the challenges of communication between islands of standards or users. The adoption of SIP-based services could enable a degree of integration with other companies to extend the reach of what, to date, have been internal services.

Of course, the partner companies need to have adopted SIP as well and to have appropriate security measures in place. This is where the challenge would lie in achieving this level of open communications! (Picture credit: Zultys: a Wi-Fi SIP phone)

SIP servers

SIP servers are the centralised capability that manages the establishment of communications sessions by users. Although there are many types of server, they are essentially just software processes and could all run on a single processor or device. There are several types of SIP server:

Registrar Server: The registrar server authenticates and registers users as soon as they come on-line. It stores identities and the list of devices in use by each user.

Location Server: The location server keeps track of users’ locations as they roam and provides this data to other SIP servers as required.

Redirect Server: When users are roaming, the Redirect Server maps session requests to a server closer to the user or an alternate device.

Proxy Server: SIP proxy servers pass SIP requests on to other servers located either downstream or upstream.

Presence Server: SIP presence servers enable users to publish their status (as ‘presentities’) to other users who would like to see it (known as ‘watchers’).
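
A toy sketch of the registrar / location service interplay described above might look like this; the AOR and contact addresses are invented for illustration.

```python
# Registration stores where a user's AOR can currently be reached; a later
# lookup returns the contact(s) a proxy or redirect server should try.
location_service = {}   # AOR -> list of current contact URIs

def register(aor: str, contact: str) -> None:
    """What a registrar does (minus authentication) when a device comes online."""
    location_service.setdefault(aor, []).append(contact)

def lookup(aor: str):
    """What a proxy or redirect server asks the location service."""
    return location_service.get(aor, [])

register("sip:xxx@yyy.com", "sip:xxx@192.0.2.10:5060")   # desk phone (illustrative)
register("sip:xxx@yyy.com", "sip:xxx@192.0.2.99:5060")   # softphone (illustrative)
print(lookup("sip:xxx@yyy.com"))
```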

Call setup Flow

The diagram below shows the initiation of a call from the PSTN network (section A), connection (section B) and disconnect (section C). The flow is quite easy to understand. One of the downsides is that if a complex session is being set up it is quite easy to reach 40 to 50+ separate transactions, which could lead to unacceptable set-up times – especially if the SIP session is being negotiated across the best-effort Internet.

(Picture source: NMS Communications)
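
Stripped of the PSTN gateway detail, the underlying SIP exchange in sections A to C boils down to something like the following sketch (the callee URI is invented); real flows add provisional responses, authentication challenges and gateway signalling, which is how the transaction count climbs towards the 40 to 50+ mentioned above.

```python
call_flow = [
    ("caller -> callee", "INVITE sip:bob@example.com"),  # A: session set-up request
    ("callee -> caller", "180 Ringing"),                 #    provisional response
    ("callee -> caller", "200 OK"),                      # B: call accepted
    ("caller -> callee", "ACK"),                         #    handshake complete; media (RTP) flows
    ("caller -> callee", "BYE"),                         # C: tear-down
    ("callee -> caller", "200 OK"),
]
for direction, message in call_flow:
    print(f"{direction:18} {message}")
```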

Round-up

As a standard, SIP has had a profound impact on our daily lives and sits comfortably alongside those other protocol acronyms that have fallen into the daily vernacular, such as IP, HTTP, WWW and TCP. Protocols that operate at the application level seem to be so much more relevant to our daily lives than those that are buried in the network, such as MPLS and ATM.

There is still much to achieve by building capability on top of SIP, such as federated services and, more importantly, interoperability. Bodies working on interoperability include SIPcenter, the SIP Forum, SIPfoundry, SIP’it and the IETF’s SPEERMINT working group. More fundamental areas under evaluation are authentication and billing.

More depth information about SIP can be found at http://www.tech-invite.com, a portal devoted to SIP and surrounding technologies.

Next time you buy a SIP Wi-Fi phone from your local shop, install it, find that it works first time AND saves you money, just think about all the work that has gone into creating this software wonder. Sometimes standards and open software hit a home run. SIP is just that.

Addendum #1: Do you know your ENUM?


IP Multimedia Subsystem or bust!

May 10, 2007

I have never felt so uncomfortable about writing about a subject as I am now while contemplating IP Multimedia Subsystem (IMS). Why this should be I’m not quite sure.

Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:

“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”

This was an initiative to bring computers and software to the world of voice switches, enabling carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:

“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”

Now there was absolutely nothing wrong with that vision, and the core technology was relatively straightforward (database look-up number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death; it just fizzled out. It didn’t really disappear as such, as most of the IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.

Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus Intelligent networks goes right back to the early 1990s and is still raging today – well at least simmering.

Put bluntly, carriers laudably want intelligence to be based in the network so they are able to provide, manage and control applications and derive revenue that will compensate for plummeting Plain Old Telephone Service (POTS) revenues. Most IT and Internet people do not share this vision, as they believe it holds back service innovation, which generally comes from small companies. There is a certain amount of truth in this view, as there are clear examples of where this is happening today if we look at the fixed and mobile industries.

Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?

What is IP Multimedia Subsystem (IMS)?

IMS is an architectural framework or reference architecture – not a standard – that provides a common method for IP multiple media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world – and the communications world, come to that – a good part of this activity could be encompassed by the term middleware. Middleware is an interface (abstraction) layer that sits between the networks and the applications / services, and provides a common Application Programming Interface (API).

The commercial justification for IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.

The technical vision of IMS is about delivering seamless services where customers are able to access any type of service, from any device they want to use, with single sign-on, common contacts and fluidity between wireline and wireless services. IMS has ambitions to deliver:

  • Common user interfaces for any service
  • Open application server architecture to enable a ‘rich’ service set
  • Separate user data from services for cross service access
  • Standardised session control
  • Inherent service mobility
  • Network independence
  • Inter-working with legacy IN applications

One of the comments I came across on the Internet, from a major telecoms equipment vendor, was that IMS is about the “need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far, as innovative services such as those mentioned generally do not come out of the carrier world.

Traditionally, each application or service offered by carriers sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols and running in complete isolation from other services, each of which sits in its own silo. In many ways this reflects the same situation that provided the motivation to develop a common control plane for data services, called GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.


Removal of service silos
Source: Business Communications Review, May 2006

As with GMPLS, most large equipment vendors are committed to IMS and supply IMS compliant products. As stated in the above article:

“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make “anything over IP and IP over everything” a reality.”

Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 1990s – albeit a trifle more complex. This is the view of Bob Bellman in an article entitled From Softswitching To IMS: Are We There Yet? Many of the core elements of a softswitch architecture are to be found in the IMS architecture, including the separation of the control and data planes.

VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006

Another associated reference architecture that is aligned with IMS, and is being pushed enthusiastically by software and equipment vendors in the enterprise world, is Service Oriented Architecture (SOA), an architecture that focuses on services as the core design principle.

IMS has been developed by an industry consortium and originated in the mobile world in an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP, 3GPP2 and TISPAN. Nowadays, just about every standards body seems to be involved, including the Open Mobile Alliance, ANSI, ITU, IETF, the Parlay Group and the Liberty Alliance – fourteen in total.

Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (three-, four- and five-letter acronyms), which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:

#1 Applications layer:

The application layer contains Application Servers (AS), which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). Like GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS), or is an HSS in a third-party network if the user is roaming (in GSM this is called the Home Location Register (HLR)).

(Source: Lucent Technologies)

The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.

Breakout Gateways provide routing information based on telephone number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.

PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.

#2 Control layer:

The control plane hosts the HSS, which is the master database of user identities and of the individual calls or service sessions currently in use by each user. There are several roles that a SIP call / session controller (Call Session Control Function, CSCF) can undertake:

  • P-CSCF (Proxy-CSCF): provides similar functionality to a proxy server in an intranet
  • S-CSCF (Serving-CSCF): the core SIP server, always located in the home network
  • I-CSCF (Interrogating-CSCF): a SIP server located at a network’s edge; its address can be found in DNS by third-party SIP servers.
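
Purely as a conceptual aide-mémoire (this is not real IMS code), the sketch below strings these roles together in the order a SIP request from a roaming user’s handset would typically meet them; the request URI is invented.

```python
# Conceptual only -- the CSCF roles listed above, in the order a SIP INVITE
# from a roaming user's handset would typically encounter them.
cscf_chain = [
    ("P-CSCF", "proxy in the visited / access network; the handset's first point of contact"),
    ("I-CSCF", "at the edge of the home network, found via DNS; consults the HSS to pick an S-CSCF"),
    ("S-CSCF", "serving controller in the home network; invokes the relevant application servers over SIP"),
]

def describe_route(request: str) -> None:
    print("Routing", request)
    for role, duty in cscf_chain:
        print(f"  {role}: {duty}")

describe_route("INVITE sip:alice@home-network.example  (illustrative request)")
```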

#3 Transport layer:

IMS encompasses any service that uses IP / MPLS as transport and pretty much all of the fixed and mobile access technologies, including ADSL, cable modem (DOCSIS), Ethernet, Wi-Fi, WiMAX and CDMA wireless. It has little choice in this matter: if IMS is to be used, it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators – IMS is also focusing on the use of IPv6, with IPv4 ‘only’ being supported in the near term.

Roundup

IMS represents a tremendous amount of work spread over six years and uses as many existing standards as possible, such as SIP and Parlay. IMS is a work in progress and much still needs to be done – security and seamless inter-working of services being but two areas.

All the major telecommunications software, middleware and integration companies are involved, and just thinking about the scale of the task needed to put in place common control for a whole raft of services makes me wonder just how practical the implementation of IMS actually is. Don’t take me wrong, I am a real supporter of these initiatives because it is hard to come up with an alternative vision that makes sense, but boy, I’m glad that I’m not in charge of a carrier IMS project!

The upsides of using IMS in the long term are pretty clear: lower costs, quicker time to market, integration of services and, hopefully, single log-in.

It’s some of the downsides that particularly concern me:

  • Non-migration of existing services: Like we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure, such as instant conferencing, messaging, gaming, personal information management, presence, location-based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services onto a brand new infrastructure in the short term – especially when that infrastructure is based on an incomplete reference architecture?

    IMS is a long-term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future and can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever will.

  • Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This was a quote from Keith Nissen of InStat in a BCR article.
  • No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture is incomplete, services will continue to be rolled out without IMS, further inflaming the non-migration issue raised above.
  • Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused on getting IMS integration sorted, they should be working on externally focused services. Without those services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS Intelligent Networks reborn, destined to suffer the same fate?
  • OSS integration: Any IMS system will need to integrate with a carrier’s often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
  • Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That role lies with the small companies on the Internet creating Web 2.0 services that utilise technologies such as presence, VoIP and AJAX today. Will any of these companies care whether a carrier has an IMS infrastructure in place?
  • Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
  • Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.

I said at the beginning of the post that I felt uncomfortable writing about IMS, and now that I’ve finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day, and whether it diverts effort, money and limited resources away from where they should be applied – creating interesting services and gaining market share. Only time will tell.

Addendum: In a previous post I wrote about the IETF’s Path Computation Element working group, and it was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF), which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here, starting at slide 10. Does RACF compete with PCE, or could PCE be a part of RACF?


The tale of DOCSIS and cable operators

May 2, 2007

When anyone who uses the Internet on a regular basis is presented with an opportunity to upgrade their access speed, they will usually jump at it without a second thought. There used to be a similar effect with personal computers, operating systems and processor speeds, but this is a less common trend these days as the benefits to be gained are often ephemeral, as we have recently seen with Microsoft’s Vista. (Picture: SWINOG)

However, the advertising headline for many ISPs still focuses on “XX Mbit/s for as little as YY pounds/month”. Personally, in recent years, I have not seen too many benefits in increasing my Internet access speed because I see little improvement when browsing normal WWW sites: their performance is no longer bottlenecked by my access connection but rather by the performance of the servers. My motivation to get more bandwidth into my home is the need to have sufficient bandwidth – both upstream and downstream – to support my family’s need to use multiple video and audio services at the same time. Yes, we are as dysfunctional as everyone else, with computers in nearly every room of the house and everyone wanting to do their own video or interactive thing.

I recently posted an overview of my experience of Joost, the new ‘global’ television channel recently launched by Skype founders Niklas Zennstrom and Janus Friis – Joost’s beta – first impressions – and it’s interesting to note that, as a peer-to-peer system, it does require significant chunks of your access bandwidth, as discussed in Joost: analysis of a bandwidth hog.

The author’s analysis shows that it “pulls around 700 kbps off the internet and onto your screen” and “sends a lot of that data on to other users – about 220 kbps upstream”. If Joost is a window on the future of IPTV on the Internet, then it should be of concern to the ISP and carrier communities, and it should also be of concern to each of us who uses it. 220kbit/s is a good chunk of the 250kbit/s upstream capability of ADSL-based broadband connections. If the upstream channel is clogged, response times on all services being accessed will be affected – even more so if several individuals are accessing Joost over a single broadband connection.

It’s these issues that make me want to upgrade my bandwidth and think about the technology that I could use to access the Internet. In this space there has been an ongoing battle for many years between twisted-copper-pair ADSL or VDSL, used by incumbent carriers, and the cable technology used by competitive cable companies such as Virgin Media to deliver the Internet to your home.

Cable TV networks (CATV) have come a long way since the 1960s, when they were based on simple analogue video distribution over coaxial cable. These days they are capable of delivering multiple services and are highly interactive, allowing in-band user control of content, unlike satellite delivery which requires a PSTN-based back-channel. The technical standard that enables these services is developed by CableLabs and is called the Data Over Cable Service Interface Specification (DOCSIS). This defines the interface requirements for cable modems involved in high-speed data distribution over cable television networks.

The graph below shows the split between ADSL- and cable-based broadband subscribers (source: Virgin Media), with cable trailing ADSL to a degree. The link provided gives an excellent overview of the UK broadband market in 2006, so I won’t comment further here.

A DOCSIS-based broadband cable system is able to deliver a mixture of MPEG-based video content mixed with IP, enabling the provision of the converged service required in 21st-century homes. Cable systems operate in a parallel universe – well, not quite, but they do run a parallel spectrum enclosed within their cable network, isolated from the open spectrum used by terrestrial broadcasters. This means that they are able to change standards when required without needing to consider other spectrum users, as happens with broadcast services.

The diagram below shows how the spectrum is split between upstream and downstream data flows (Picture: SWINOG), and various standards specify the data modulation (QAM) and bit-rate standards. As is usual in these matters, there are differences between the US and European standards due to differing frequency allocations and television standards – NTSC in the USA and PAL in Europe. Data is usually limited to between 760 and 860MHz.

The DOCSIS standard has been developed by CableLabs and the ITU with input from a multiplicity of companies. The customer premises equipment is called a cable modem (CM) and the central office (head-end) equipment is called a cable modem termination system (CMTS).

Since 1997 there have been various releases (source: CableLabs) of the DOCSIS standard, with the most recent, version 3.0, released in 2006.

DOCSIS 1.0 (Mar. 1997) (High Speed Internet Access) Downstream: 42.88 Mbit/s and Upstream: 10.24 Mbit/s

  • Modem price has declined from $300 in 1998 to <$30 in 2004

DOCSIS 1.1 (Apr. 1999) (Voice, Gaming, Streaming)

  • Interoperable and backwards-compatible with DOCSIS 1.0
  • “Quality of Service”
  • Service Security: CM authentication and secure software download
  • Operations tools for managing bandwidth service tiers

DOCSIS 2.0 (Dec. 2001) (Capacity for Symmetric Services) Downstream: 42.88 Mbit/s and Upstream: 30.72 Mbit/s

  • Interoperable and backwards compatible with DOCSIS 1.0 / 1.1
  • More upstream capacity for symmetrical service support
  • Improved robustness against interference (A-TDMA and S-CDMA)

DOCSIS 3.0 (Aug. 2006) Downstream: 160 Mbit/s and Upstream: 120 Mbit/s

  • Wideband services provided by expanding the usable bandwidth through channel bonding, i.e. instead of data being delivered over a single channel, it is multiplexed over a number of channels – see the rough calculation after this list. (A previous post talked about bonding in the ADSL world: Sharedband: not enough bandwidth?)
  • Support for IPv6
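
The bonding arithmetic is simple enough to sketch; the per-channel figures are the single-channel rates quoted in the entries above, and the bonded totals are raw rates before protocol overheads.

```python
per_channel_downstream_mbit = 42.88   # single downstream channel rate quoted above
per_channel_upstream_mbit = 30.72     # single upstream channel rate quoted above

for bonded in (1, 2, 4):
    print(f"{bonded} bonded downstream channel(s): ~{bonded * per_channel_downstream_mbit:.0f} Mbit/s")
print(f"4 bonded upstream channels: ~{4 * per_channel_upstream_mbit:.0f} Mbit/s")
# Four bonded downstream channels give ~172 Mbit/s raw, in line with the
# ~160 Mbit/s headline figure once overheads are taken into account.
```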

Roundup

With the release of the DOCSIS 3.0 standard it looks like cable companies around the world are now set to upgrade the bandwidth they can offer to their customers in the coming years. However, this will be an expensive upgrade to undertake, with head-end equipment needing to be upgraded first, followed by field cable modem upgrades over time. I would hazard a guess that it will be at least five years before the average cable user sees the benefits.

I also wonder about the price that will need to be paid for the higher bandwidth gained through channel bonding when there is limited spectrum available for data services on the cable system. A limit on subscriber-number scalability, perhaps?

I was also interested to read about the possible adoption of IPv6 in DOCSIS 3.0. It was clear to me many years ago that IPv6 would ‘never’ (never say never!) be adopted across the Internet because of the scale of the task. Its best chance would be in closed systems such as satellite access services and IPTV systems. Maybe cable systems are another option. I will catch up on IPv6 in a future post.


Aria Networks shows the optimal path

April 26, 2007

In a couple of previous posts, Path Computation Element (PCE): IETF’s hidden jewel and MPLS-TE and network traffic engineering, I mentioned a company called Aria Networks, who are working in the technology space discussed in those posts. I would like to take this opportunity to write a little about them.

Aria Networks are a small UK company that have been going for around eighteen months. The company is commercially led by Tony Fallows; the core technology has been developed by Dr Jay Perrett, Chief Science Officer and Head of R&D, and Daniel King, Chief Operating Officer; and their CTO is Adrian Farrel. Adrian currently co-chairs the IETF Common Control and Measurement Plane (CCAMP) working group that is responsible for GMPLS, and also co-chairs the IETF Path Computation Element (PCE) working group.

The team at Aria have brought some very innovative software technology to the products they supply to network operators and network equipment vendors. Their raison d’être, as articulated by Daniel King, is “to fundamentally change the way complex, converged networks are designed, planned and operated”. This is an ambitious goal, so let’s take a look at how Aria plan to achieve it.

Aria currently supplies software that addresses the complex task of computing constraint-based packet paths across an IP or MPLS network and optimising that network holistically and in parallel. Holistic is a key word in understanding Aria’s products. It means that when an additional path needs to be computed in a network, the whole network and all the services running over it are recalculated and optimised in a single calculation. A simple example of why this is so important is shown here.

This ability to compute holistically rather than on a piecemeal basis requires some very slick software, as it is a very computationally intensive (‘hard’) calculation that could easily take many hours using other systems. Parallel is the other key word. When an additional link is added to a network there could be a knock-on effect on any other link in the network, therefore re-computing all the paths in parallel – both existing and new – is the only way to ensure a reliable and optimal result is achieved.

Traffic engineering of IP, MPLS or Ethernet networks could quite easily be dismissed by the non-technical management of a network operator as an arcane activity but, as anyone with experience of operating networks can vouch, good traffic engineering brings pronounced benefits that directly reduce costs while improving customers’ experience of using services. Of course, a lack of appropriate traffic engineering activity has the opposite effect. Only one thing could be put above traffic engineering in achieving a good brand image, and that is good customer service. The irony is that if money is not spent on good traffic engineering, ten times the amount will need to be spent on call centre facilities papering over the cracks!

One quite common view held by a number of engineers is that traffic engineering is not required because, they say, “we throw bandwidth at our network”. If a network has an abundance of bandwidth then in theory there will never be any delays caused by an inadvertent overload of a particular link. This may be true, but it is certainly an expensive and short-sighted solution and one that could turn out to be risky as new customers come on board. Combine this with the slow provisioning times often associated with adding additional optical links and you have the makings of major network problems. The challenge of planning and optimisation is significantly increased in a Next Generation Network (NGN), where traffic is actively segmented into different traffic classes such as real-time VoIP and best-effort Internet access. Traffic engineering tools will become even more indispensable than they have been in the past.

    It’s interesting to note that even if a protocol like MPLS-TE, PBT, PBB-TE or T-MPLS has all the traffic engineering bells and whistles any carrier may ever desire, it does not mean they can be used. TE extensions such as Fast ReRoute (FRR) need sophisticated tools or they quickly become unmanageable in a real network.

    Aria’s product family is called intelligent Virtual Network Topologies (iVNT). Current products are aimed at network operators that operate IP and / or MPLS-TE based networks.

    iVNT MPLS-TE enables network operators to design, model and optimise MPLS Traffic Engineered (MPLS-TE) networks that use constraint-based point-to-point Label Switched Paths (LSPs), constraint-based point-to-multipoint LSPs and Fast-Reroute (FRR) bypass tunnels. One of its real strengths is that it goes to town on supporting any type of constraint that could be placed on a link – delay, hop count, cost, required bandwidth, link-layer protection, path disjointedness, bi-directionality, etc. Indeed, it is quite straightforward to add any additional constraints that an individual carrier may need.
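
    To make the idea of constraint-based computation concrete, here is a minimal sketch (my own illustration with invented link data, not the iVNT implementation) in which links that cannot meet a bandwidth request are pruned before a shortest-path search, and the resulting path is checked against delay and hop-count bounds:

        import heapq

        # (src, dst): cost, free bandwidth in Mbit/s and delay in ms -- invented data.
        LINKS = {
            ("PE1", "P1"): dict(cost=10, free_bw=600, delay=2),
            ("P1", "PE2"): dict(cost=10, free_bw=100, delay=2),
            ("PE1", "P2"): dict(cost=15, free_bw=900, delay=5),
            ("P2", "PE2"): dict(cost=15, free_bw=900, delay=5),
        }

        def compute_path(src, dst, bw, max_delay, max_hops):
            # 1. Prune every link that cannot carry the requested bandwidth.
            adj = {}
            for (a, b), attrs in LINKS.items():
                if attrs["free_bw"] >= bw:
                    adj.setdefault(a, []).append((b, attrs))
            # 2. Cheapest-cost search over what remains, tracking delay and hops.
            queue = [(0, 0, 0, src, [src])]       # cost, delay, hops, node, path
            visited = set()
            while queue:
                cost, delay, hops, node, path = heapq.heappop(queue)
                if node == dst:
                    if delay <= max_delay and hops <= max_hops:
                        return path
                    continue                      # cheapest path broke a bound; keep looking
                if node in visited:
                    continue
                visited.add(node)
                for nxt, attrs in adj.get(node, []):
                    heapq.heappush(queue, (cost + attrs["cost"], delay + attrs["delay"],
                                           hops + 1, nxt, path + [nxt]))
            return None                           # request cannot be satisfied

        # A 200 Mbit/s LSP with a 12 ms delay bound avoids the congested P1-PE2 link.
        print(compute_path("PE1", "PE2", bw=200, max_delay=12, max_hops=4))

    In this toy topology the 200Mbit/s request is steered away from the link that only has 100Mbit/s free; a real tool would of course support far more constraint types and work against a live Traffic Engineering Database.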

    iVNT IP enables network operators to design, model and optimise IP and Label Distribution Protocol (LDP) networks based on the metrics used in the Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS) Interior Gateway Protocols (IGPs), to ensure traffic flows are correctly balanced across the network. Although forgoing more advanced traffic engineering capabilities is clearly not the way to go in the future, many carriers still stick with ‘simple’ IP solutions – which are far from simple in practice and can be an operational nightmare to manage.

    What makes Aria software so interesting?

    To cut to the chase, it’s the use of Artificial Intelligence (AI) applied to path computation and network optimisation. Conventional algorithms used in network optimisation software are linear in nature and usually deterministic, in that they produce the same answer for the same set of variables every time they are run. They are usually ‘tuned’ to a single service type and are often very slow to produce results when faced with a very large network that uses many paths and carries many services. Aria’s software may produce different – but equally correct – results each time it is run, and it is able to handle multiple services that are inherently and significantly different from a topology perspective, e.g. point-to-point (P2P) and point-to-multipoint (P2MP) based services, mesh-like IP-VPNs, etc.

    Aria uses evolutionary and genetic techniques, which are good at learning new problems, and runs multiple algorithms in parallel. The software then selects whichever algorithm is better at solving the particular problem it is challenged with. The model evolves multiple times and quickly converges on the optimal solution. Importantly, the technology is very amenable to parallel computing to speed up the processing of complex problems such as those found in holistic network optimisation.

    It generally does not make sense to use the same algorithm to solve the network path optimisation needs of every service – iVNT runs many in parallel and self-selection shows which is optimal for the current problem.
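
    As a flavour of how an evolutionary approach can tackle this kind of whole-network problem, here is a toy genetic algorithm (purely illustrative – it is not DANI or iVNT, and the topology and demands are invented) in which each ‘gene’ chooses a candidate path for one demand and fitness is the worst link utilisation that results:

        import random

        random.seed(1)                            # repeatable toy run

        # Diamond network: a 'top' and a 'bottom' route, both built from 12 Mbit/s links.
        CAPACITY = {"S-A": 12, "A-T": 12, "S-B": 12, "B-T": 12}
        TOP, BOTTOM = ["S-A", "A-T"], ["S-B", "B-T"]
        # Each demand: (size in Mbit/s, candidate paths). All values are invented.
        DEMANDS = [(7, [TOP, BOTTOM]), (7, [TOP, BOTTOM]),
                   (3, [TOP, BOTTOM]), (3, [TOP, BOTTOM])]

        def worst_utilisation(genome):
            """Fitness: the most heavily loaded link for this set of path choices."""
            load = {link: 0.0 for link in CAPACITY}
            for (size, paths), gene in zip(DEMANDS, genome):
                for link in paths[gene]:
                    load[link] += size
            return max(load[link] / CAPACITY[link] for link in CAPACITY)

        def evolve(pop_size=20, generations=40, mutation=0.2):
            pop = [[random.randrange(len(paths)) for _, paths in DEMANDS]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=worst_utilisation)   # lower worst-case load is fitter
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(DEMANDS))     # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mutation:              # mutate: re-route one demand
                        i = random.randrange(len(DEMANDS))
                        child[i] = random.randrange(len(DEMANDS[i][1]))
                    children.append(child)
                pop = survivors + children
            return min(pop, key=worst_utilisation)

        best = evolve()
        print("path choices:", best,
              "worst link utilisation:", round(worst_utilisation(best), 2))
        # Expect the two 7 Mbit/s and the two 3 Mbit/s demands to be split across the routes.

    Selection keeps the layouts with the lowest worst-case load, crossover mixes partial layouts and mutation re-routes the odd demand; on this tiny example the population converges almost immediately, but the same machinery lends itself naturally to parallel evaluation of much larger populations.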

    Aria’s core technology is called DANI (Distributed Artificial Neural Intelligence) and is a “flexible, stable, proven, scalable and distributed computation platform”. DANI was developed by two of Aria’s founders, Jay Perrett and Daniel King, and has had a long proving ground in the pharmaceutical industry for pre-clinical drug discovery, which requires the analysis of millions of individual pieces of data to isolate interesting combinations. The company that addresses the pharmaceutical industry is Applied Insilico.

    Because of the use of AI, iVNT is able to compute a solution for a complex network containing thousands of different constraint-based links, hundreds of nodes and multiple services such as P2P LSPs, Fast Reroute (FRR) links, P2MP links (IPTV broadcast) and meshed IP-VPN services in just a few minutes on one of today’s notebooks.

    What’s the future direction for Aria’s products?

    Step to multi-layer path computation: As discussed in the posts mentioned above, Aria is very firmly supportive of the need to provide automatic multi-layer path computation. This means that the addition of a new customer’s IP service will be passed as a bandwidth demand to the MPLS network and downwards to the GMPLS-controlled ASTN optical network, as discussed in GMPLS and common control.

    Path Computation Element (PCE): Aria are at the heart of the development of on-line path computation so if this is a subject of interest to you then give Aria a call.

    Two product variants address this opportunity:

    iVNT Inside is aimed at Network Management System (NMS) vendors, Operational Support System (OSS) vendors and Path Computation Element (PCE) vendors that have a need to provide advanced path computation capabilities embedded in their products.

    iVNT Element is for network equipment vendors that have a need to embed advanced path computation capabilities in their IP/MPLS routers or optical switches.

    Roundup

    Aria Networks could be considered a rare company in the world of start-ups. It has a well-tried technology whose inherent characteristics are admirably matched to the markets and the technical problems it is addressing. Its management team are actively involved in developing the standards that their products are, or will be, able to support. There is no better basis for getting their products right.

    It is early days for carriers turning in their droves to NGNs, and it is even earlier days for them to adopt on-line PCE in their networks, but Aria’s timing is on the nose as most carriers are actively thinking about these issues and looking for tools today.

    Aria could be well positioned to benefit from the explosion of NGN convergence as it seems – to me at least – that fully converged networks will be very challenging to design, optimise and operate without the new approach and tools from companies such as Aria.

    Note: I need to declare an interest as I worked with them for a short time in 2006.


    PONs are anything but passive

    April 18, 2007

    Passive Optical Networks (PONs) are an enigma to me in many ways. On one hand the concept goes back to the late 1980s and has been floating around ever since, with obligatory presentations from the large vendors whenever you visited them. Yes, for sure there are Pacific Rim countries and the odd state or incumbent carrier in the western world deploying the technology, but they never seemed to impact my part of the world. On the other hand, the technology would provide virtual Internet nirvana for me at home, with 100Mbit/s available to support video on demand to each member of my family who, in 21st century fashion, have computers in virtually every room of the house! This high bandwidth still seems as far away as ever, with an average downstream bandwidth of 4.5Mbit/s in the UK. I see 7Mbit/s as I am close to a BT exchange. We are still struggling to deliver 21st century data services over 19th century copper wires using ATM-based Digital Subscriber Line (DSL) technology. If you are in the right part of the country, you get marginally higher rates from your cable company. Why are carriers not installing optical fibre to every home?

    To cut to the chase, it’s because of the immense cost of deploying it. Fibre to the Home (FTTH), as it is known, requires the installation of a completely new optical fibre infrastructure between the carrier’s exchanges and homes. It would almost require a government-led and government-funded initiative to make it worthwhile – which of course is what has happened in the far east. Here in the UK the picture is further clouded by the existing cable industry, which has struggled to reach profitability based on massive investments in infrastructure during the 90s.

    What are Passive Optical Networks (PONs)?

    The key word is passive. In standard optical transmission equipment, used in the core of public voice and data networks, all of the data being transported is switched using electrical or optical switches. This means that investment needs to be made in network equipment to undertake that switching, and that is expensive. In a PON, instead of electrical equipment joining or splitting optical fibres, the fibres are simply fused together at minimal cost – just like T-junctions in domestic plumbing. Light travelling down the fibre then splits or combines when it hits a splitter. Equipment in the carrier’s exchange (or Central Office [CO]) and in the customer’s home then multiplexes or de-multiplexes an individual customer’s data stream.
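
    A tiny sketch may help to show why a purely passive tree can still deliver private streams (this is my own illustration, not a real PON protocol implementation): the exchange broadcasts every downstream frame into the shared fibre, the splitter copies the light to every home, and each Optical Network Unit (ONU) simply keeps the frames addressed to it. Real systems also encrypt downstream traffic so neighbours cannot snoop.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Frame:
            onu_id: int        # which subscriber terminal the frame is meant for
            payload: str

        class ONU:
            """Optical Network Unit in the subscriber's home."""
            def __init__(self, onu_id: int):
                self.onu_id = onu_id
                self.received: List[Frame] = []

            def on_light(self, frame: Frame) -> None:
                # The splitter is passive: every ONU sees every frame and simply
                # discards the ones not addressed to it.
                if frame.onu_id == self.onu_id:
                    self.received.append(frame)

        def broadcast(frames: List[Frame], onus: List[ONU]) -> None:
            for frame in frames:
                for onu in onus:           # passive split: the same light reaches all homes
                    onu.on_light(frame)

        homes = [ONU(i) for i in range(32)]    # a typical 1:32 split
        broadcast([Frame(0, "IPTV packet"), Frame(5, "web page")], homes)
        print(len(homes[0].received), len(homes[5].received), len(homes[7].received))
        # expected: 1 1 0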

    Although the use of PONs considerably reduces equipment costs, as no switching equipment is required in the field and hence no electrical power feeds are needed, it is still an extremely expensive technology to deploy, making it very difficult to create a business case that stacks up. A major problem is that there is often no free space available in existing ducts, pushing carriers to new digs. Digging up roads and laying the fibre is a costly activity. I’m not sure what the actual costs are these days, but £50 per metre dug used to be the figure many years ago.

    As seems to be the norm in most areas of technology, there are two PON standards slugging it out in the market place with a raft of evangelists attached to both camps. The first is Gigabit PON (GPON) and the second is Ethernet PON (EPON).

    About Gigabit PON (GPON)

    The concept of PONs goes back to the early 1990s, to the time when the carrier world was focussed on a vision of ATM being the world’s standard packet or cell based WAN and LAN transmission technology. This never really happened, as I discussed in The demise of ATM, but ATM lives on in other services defined around that time. Two examples are broadband Asymmetric DSL (ADSL) and the lesser known ATM Passive Optical Network (APON).

    APON was not widely deployed and was soon superseded by the next best thing – Broadband PON (BPON), also known as ITU-T G.983 as it was developed under the auspices of the ITU. More importantly, APON was limited in the number of data channels it could handle, and BPON added Wavelength Division Multiplexing (WDM) (covered in TechnologyInside in Making SDH, DWDM and packet friendly). BPON uses one wavelength for 622Mbit/s downstream traffic and another for 155Mbit/s upstream traffic.

    If there are 32 subscribers on the system, that bandwidth is divided among the 32 subscribers, plus overhead. Upstream, a BPON system provides 3 to 5 Mbit/s per subscriber when fully loaded.

    GPON is the latest upgrade from this stable; it uses SDH-style framing and provides a data rate of 2.5Gbit/s downstream and 1.25Gbit/s upstream. The big technical difference is that GPON is built around Ethernet and IP rather than ATM.
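
    The headline rates only tell half the story because the bandwidth is shared across the split. Ignoring framing overhead (so treat these as rough, illustrative numbers only), the per-subscriber arithmetic for a 1:32 split works out as follows:

        # Rough per-subscriber arithmetic for a 1:32 split, ignoring framing overhead.
        SPLIT = 32
        systems = {
            "BPON": (622, 155),      # downstream, upstream in Mbit/s
            "GPON": (2488, 1244),
        }
        for name, (down, up) in systems.items():
            print(f"{name}: ~{down / SPLIT:.0f} Mbit/s down, "
                  f"~{up / SPLIT:.1f} Mbit/s up per subscriber")

    That is roughly 19Mbit/s down and just under 5Mbit/s up per BPON subscriber – consistent with the 3 to 5Mbit/s upstream figure quoted above – and around 78Mbit/s down per GPON subscriber.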

    It is likely that GPON will find its natural home in the USA and Europe. An example is Verizon, which is deploying 622Mbit/s BPON to its subscribers but is committed to upgrading to GPON within twelve months. In the UK, BT’s Openreach has selected GPON for a trial.

    About Ethernet PON (EPON)

    EPON comes from the IEEE stable and is specified as IEEE 802.3ah. EPONs are based on Ethernet standards and derive the benefits of using this commonly adopted technology. EPON uses only a single fibre between the subscriber split and the central office and does not require any power in the field, as would be needed if kerb-side equipment were required. EPON also supports downstream Point-to-Multipoint (P2MP) broadcast, which is very important for broadcasting video. As with carrier-grade Ethernet standards such as PBB, some core Ethernet features such as CSMA/CD have been dropped in this new use of Ethernet. Only one subscriber is able to transmit at any time, using a Time Division Multiple Access (TDMA) protocol.
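
    The upstream sharing can be pictured with a very simplified grant-allocation sketch (illustrative only – it is not the IEEE 802.3ah MPCP/DBA machinery): the OLT in the exchange looks at each ONU’s reported backlog and hands out non-overlapping transmission windows within each cycle, so only one subscriber transmits at a time.

        UPSTREAM_MBPS = 1000          # nominal 1 Gbit/s shared upstream (illustrative)
        CYCLE_MS = 2.0                # length of one grant cycle

        def allocate_grants(queue_bytes):
            """Share one grant cycle among ONUs in proportion to their backlog."""
            total = sum(queue_bytes.values())
            cycle_capacity = UPSTREAM_MBPS * 1e6 / 8 * CYCLE_MS / 1000   # bytes per cycle
            grants, start = {}, 0.0
            for onu, backlog in sorted(queue_bytes.items()):
                share = backlog / total if total else 0.0
                length_ms = share * CYCLE_MS
                grants[onu] = (round(start, 3), round(length_ms, 3),
                               int(min(backlog, share * cycle_capacity)))
                start += length_ms                   # windows never overlap
            return grants

        # Three busy ONUs and one idle one; the idle ONU gets no window this cycle.
        print(allocate_grants({"onu1": 120_000, "onu2": 60_000,
                               "onu3": 60_000, "onu4": 0}))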

    A typical deployment is shown in the picture below: one fibre to the exchange connecting 32 subscribers.

    EPON architecture (Source: IEEE)

    A Metro Ethernet Forum overview of EPON can be found here.

    The Far East, especially Japan, has taken EPON to its heart, with the vast majority being installed by NTT, the major Japanese incumbent carrier, followed by Korea Telecom with hundreds of thousands of EPON connections.

    Roundup

    There is still lots of development taking place in the world of PONs. On one hand, 10Gbit/s EPON is being talked about to give it an edge over 2.5Gbit/s GPON. On the other, WDM PONs are being trialled in the Far East, which would enable far higher bandwidths to be delivered to each home. WDM-PON systems allocate a separate wavelength to each subscriber, enabling the delivery of 100Mbit/s or more.

    Only this month it was announced that a Japanese MSO Moves 160 Mbit/s using advanced cable technology (the subject of a future TechnologyInside post).

    DSL-based broadband suffers from a pretty major problem: the farther the subscriber is from their local exchange, the lower the data rate that can be supported reliably. PONs do not have this limitation (well, technically they do, but the reach is much greater). So in the race to increase data rates in the home, PONs are a clear-cut winner, along with cable technologies such as DOCSIS 3.0 used by cable operators.

    Personally, I would not expect PON deployment to rise above its snail-like pace in Europe at any time in the near future. Expect to see the usual trials announced by the largest incumbent carriers such as BT, FT and DT, but don’t hold your breath waiting for it to arrive at your door. This approach has been questioned recently in a government report warning that the lack of high-speed Internet access could jeopardise the UK’s growth in future years.

    You may think “so what – I’m happy with 2 to 7Mbit/s ADSL!”, but I can say with confidence that you should not be happy. The promise of IPTV services is really starting to be delivered at long last, and encoding bandwidths of 1 to 2Mbit/s really do not cut the mustard in the quality race. That is the case for standard definition, let alone high definition TV. Moreover, with each family member having a computer and television in their own room, and each wanting to watch or listen to their own programmes simultaneously, low-speed ADSL connections are far from adequate.

    One way out of this is to bond multiple DSL lines together to gain that extra bandwidth. I wrote a post a few weeks ago – Sharedband: not enough bandwidth? – about a company that provides software to do just this. The problem is that you would require an awful lot of telephone lines to get the 100Mbit/s I really want! Maybe I should emigrate?

    Addendum #1: Economist Intelligence Unit: 2007 e-readiness rankings


    GMPLS and common control

    April 16, 2007

    From small beginnings, MultiProtocol Label Switching (MPLS) has come a long way in ten years. Although there are a considerable number of detractors who believe it costly and challenging to manage, it has now been deployed in just about every carrier around the world in one guise or another (MPLS-TE), as discussed in The rise and maturity of MPLS. Moreover, it is now extending its reach down the stack into the optical transmission world through activities such as T-MPLS, covered in PBB-TE / PBT or will it be T-MPLS? (Picture: GMPLS: Architecture and Applications by Adrian Farrel and Igor Bryskin). In the same way that early SDH standards did not encompass appropriate support for packet based services, as discussed in Making SDH, DWDM and packet friendly, initial MPLS standards were firmly focussed on IP networks and not intended for use with optical wavelength or TDM switching.

    The promise of MPLS was to bring the benefits of a connection-oriented regime to the inherently connectionless world of IP networks and to be able to send traffic along pre-determined paths, thus improving performance. This was key for the transmission of real-time or isochronous services such as VoIP over IP networks. Labels attached to packets enabled the creation of Label Switched Paths (LSPs) which packets would follow through the network. Just as importantly, it was possible to specify the quality of service (QoS) of an LSP, thus enabling the prioritisation of traffic based on importance.
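
    A minimal sketch of the label-swapping idea (router names and label values are invented) shows why a packet follows the pre-determined path: each router simply looks up the incoming label, swaps it for the next one and forwards, with no per-hop IP routing decision:

        # (previous hop, incoming label) -> (next hop, outgoing label)
        FORWARDING_TABLES = {
            "ingress": {("customer", None): ("core1", 101)},   # push label 101
            "core1":   {("ingress", 101): ("core2", 202)},     # swap 101 -> 202
            "core2":   {("core1", 202): ("egress", 303)},      # swap 202 -> 303
            "egress":  {("core2", 303): ("customer", None)},   # pop label, deliver
        }

        def follow_lsp(first_hop, packet_label=None, prev="customer"):
            hops, node, label = [], first_hop, packet_label
            while node != "customer":
                hops.append((node, label))
                next_hop, next_label = FORWARDING_TABLES[node][(prev, label)]
                prev, node, label = node, next_hop, next_label
            return hops

        print(follow_lsp("ingress"))
        # [('ingress', None), ('core1', 101), ('core2', 202), ('egress', 303)]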

    It was inevitable that MPLS would be extended so it could be applied to the optical world, and this is where the IETF’s Generalised MPLS (GMPLS) standards come in. Several early packet and data transmission standards bundled together signalling and data planes in vertical ‘stove-pipes’, creating services that needed to be managed from top to bottom completely separately from each other.

    The main vision of GMPLS was to create a common control plane that could be used across multiple services and layers thus considerably simplifying network management by automating end-to-end provisioning of connections and centrally managing network resources. In essence GMPLS extends MPLS to cover packet, time, wavelength and fibre domains. A GMPLS control plane also lies at the heart of T-MPLS replacing older proprietary optical Operational Support Systems (OSS) supplied by optical equipment manufacturers. GMPLS provides all the capabilities of those older systems and more.

    GMPLS is also often referred to as Automatic Switched Transport Network (ASTN) although GMPLS is really the control plane of an ASTN.

    GMPLS extends MPLS functionality by creating and provisioning:

    • Time Division Multiplex (TDM) paths, where time slots are the labels (SONET / SDH).
    • Frequency Division Multiplex (FDM) paths, where optical frequency such as seen in WDM systems is the label.
    • Space Division Multiplexed (SDM) paths, where the label indicates the physical position of data (photonic cross-connects).
    Switching domain    Traffic type       Forwarding scheme    Example of device
    Packet, cell        IP, ATM            Label                IP router, ATM switch
    Time                TDM (SONET/SDH)    Time slot            Digital cross-connect
    Wavelength          Transparent        Lambda               DWDM
    Physical space      Transparent        Fibre, line          Optical cross-connect (OXC)
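
    One way to picture the generalisation is that the ‘label’ stops being just a number in a packet header and becomes whatever identifies the resource being switched. The sketch below is my own illustration of that abstraction (it is not the actual GMPLS label encoding):

        from dataclasses import dataclass
        from enum import Enum

        class SwitchingType(Enum):
            PACKET = "packet"          # MPLS shim label
            TDM = "tdm"                # SONET/SDH time slot
            LAMBDA = "lambda"          # DWDM wavelength
            FIBER = "fiber"            # whole fibre / physical port

        @dataclass
        class GeneralisedLabel:
            switching: SwitchingType
            value: object              # label number, time slot, wavelength, port id

        lsp_segments = [
            GeneralisedLabel(SwitchingType.PACKET, 101),
            GeneralisedLabel(SwitchingType.TDM, "VC-4 #3"),
            GeneralisedLabel(SwitchingType.LAMBDA, "1550.12 nm"),
            GeneralisedLabel(SwitchingType.FIBER, "port 7"),
        ]
        for seg in lsp_segments:
            print(f"{seg.switching.value:>7}: {seg.value}")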

    GMPLS applicability

    GMPLS has extended and enhanced the following aspects of MPLS:

    • Signalling protocols – RSVP-TE and CR-LDP
    • Routing protocols – OSPF-TE and IS-IS-TE

    GMPLS has also added:

    • Extensions to accommodate the needs of SONET / SDH and optical networks.
    • A new protocol, the Link Management Protocol (LMP), to manage and maintain the health of the control and data planes between two neighbouring nodes. LMP is an IP-based protocol that works alongside the RSVP-TE and CR-LDP extensions.

    As GMPLS is used to control highly dissimilar networks operating at different levels in the stack, there are a number of issues it needs to handle in a transparent manner:

    • It does not just forward packets in routers, but needs to switch in time, wavelength or physical ports (space) as well.
    • It should work with all applicable switched networks – OTN, SONET/SDH, ATM, IP, etc.
    • There are still many switches that are not able to inspect traffic and thus not able to extract labels – this is especially true for TDM and optical networks.
    • It should facilitate dissimilar network interoperation and integration.
    • Packet networks work at a finer granularity than optical networks – it would not make sense to allocate a 622Mbit/s SDH link to a 1Mbit/s video IP stream by mistake.
    • There is a significant difference in scale between IP and optical networks from a control perspective – optical networks being much larger with thousands of wavelengths to manage.
    • There is often a much bigger latency in setting up an LSP on an optical switch than there is on an IP router.
    • SDH and SONET systems can undertake fast switch restoration in less than 50ms in case of failure – a GMPLS control plane needs to handle this effectively.

    Round-up

    GMPLS / ASTN is now well entrenched in the optical telecommunications industry, with many, if not most, of the principal optical equipment manufacturers demonstrating compatible systems.

    It’s easy to see the motivation to create a common control plane (GMPLS was defined under the auspices of the IETF’s Common Control and Measurement Plane (CCAMP) working group) as it would considerably reduce the complexity and cost of managing fully converged Next Generation Networks (NGNs). Indeed, it is hard to see how any carrier could implement a real converged network without it.

    As discussed in Path Computation Element (PCE): IETF’s hidden jewel, converged NGNs will need to compute service paths across multiple networks and multiple domains, and to automatically pass service provision at the IP layer down to optical networks such as SDH and ASTN. Again, it is hard to see how this vision can be implemented without a common control plane and GMPLS.

    To quote the concluding comment in GMPLS: The Promise of the Next-Generation Optical Control Plane (IEEE Communications Magazine, July 2005, Vol. 43, No. 7):

    “we note that far from being abandoned in a theoretical back alley, GMPLS is very much alive and well. Furthermore, GMPLS is experiencing massive interest from vendors and service providers where it is seen as the tool that will bring together disparate functions and networks to facilitate the construction of a unified high-function multilayer network operators will use as the foundation of their next-generation networks. Thus, while the emphasis has shifted away from the control of transparent optical networks over the last few years, the very generality of GMPLS and its applicability across a wide range of switching technologies has meant that GMPLS remains at the forefront of innovation within the Internet. “


    Chaos in Bangladesh’s ‘illegal’ VoIP businesses

    April 11, 2007


    Take a listen to a report on BBC Radio Four’s PM programme, broadcast on 9th April, which talks about the current chaos in Bangladesh brought about by the enforced closure of ‘illegal’ VoIP businesses. This is one of the impacts of the state of emergency imposed three months ago, and it has resulted in a complete breakdown of the Bangladeshi phone network.

    It seems that VoIP calls account for up to 80% of telephone traffic into the country from abroad, driven by low call rates of between 1 and 2 pence per minute.

    The new military-backed government has been waging war on small VoIP businesses, with the “illegality and corruptions of the past being too long tolerated”. Many officials have been arrested, buildings pulled down and businesses closed.

    The practical result has thrown the telephone industry into chaos as hundreds of thousands of Bangladeshis living abroad try to call home only to get the engaged tone.

    “In many countries VoIP is legal but in Bangladesh it has been long rumoured that high profile politicians have been operating the VoIP businesses and had an interest in keeping them outside of the law and unregulated to avoid taxes on the enormous revenues they generated.”

    The report says that the number of conventional phone lines is being doubled in April, but only to 30,000 lines – with a population of over 140 million people this is far too few!

    You can listen to the report here: Chaos in Bangladesh’s Illegal VoIP business (Copyright BBC).

    It really is amazing how disruptive a real disruptive technology can be, but when this happens it usually comes back to bite us!

    I talked about the Sim Box issue in Revector, detecting the dark side of VoIP, and the Bangladesh situation provides the reasoning about why incumbent carriers are often hell bent on stamping VoIP traffic out. In the western world, the situation is no different, but governments and carriers do not just bulldoze the businesses – maybe they should in some cases!

    Addendum #1: the-crime-of-voice-over-ip-telephony/


    Path Computation Element (PCE): IETF’s hidden jewel

    April 10, 2007

    In a previous post, MPLS-TE and network traffic engineering, I talked about the challenges of communication network traffic engineering and capacity planning and their relation to MPLS-TE. Interestingly, I realised that I did not mention that all of the engineering planning, design and optimisation activities that form the core of network management usually take place off-line. What I mean by this is that a team of engineers sit down, either on an ad hoc basis driven by new network or customer acquisitions or as part of an annual planning cycle, to produce an upgrade or migration plan that can be used to extend their existing network to meet the needs of the additional traffic. This work does not impact live networks until the OPEX and CAPEX plans have been agreed and signed off by management teams and then implemented. A significant proportion of the data that drives this activity is obtained from product marketing and/or sales teams, who are supposed to know how much additional business, and hence additional traffic, will be imposed on the network in the time period covered by the planning activities.

    This long-term method of planning network growth has been used since the dawn of time and the process should put in place the checks and balances (that were thrown to the wind in the late 1990s) to ensure that neither too much nor too little investment is made in network expansion.

    What is a Path Computation Element (PCE)?

    What is a path through the network? I’ve covered this extensively in my previous posts about MPLS’s ability to guide traffic through a complex network and force particular packet streams to follow a constraint-based and pre-determined path from network ingress to network egress. This deterministic path or tunnel enables the improved QoS management of real-time services such as Voice over IP or IPTV.

    Generally, paths are calculated and managed off-line as part of the overall traffic engineering activity. When a new customer is signed up, their traffic requirements are determined and the most appropriate paths are superimposed on the current network topology to best meet the customer’s needs and balance traffic distribution across the network. If new physical assets are required, then these are provisioned and deployed as necessary.

    Traditional planning cycles are focussed on medium to long term needs and cannot really be applied to shorter planning horizons. Such short term needs could derive from a number of requirements, such as:

    • Changing network configurations dependent on the time of day; for example, there is usually a considerable difference in traffic profiles between office hours, evening hours and night time. The possibility of dynamically moving traffic dependent on busy hours (time being the new constraint) could provide significant cost benefits.
    • Dynamic or temporary path creation based on customers’ transitory needs.
    • Improved busy hour management through auto-rerouting of traffic.
    • Dynamic balancing of network load to reduce congestion.
    • Improved restoration when faults occur.

    To be able to undertake these tasks a carrier would need to move away from off-line path calculation to on-line path calculation, and this is where the IETF’s Path Computation Element (PCE) Working Group comes to the rescue.

    In essence, on-line PCE software acts very much along the same lines as a graphics chip handling off-loaded calculations for the main CPU in a personal computer. For example, a service requires that a new path be generated through the network, and that request, together with the constrained-path requirements such as bandwidth, delay, etc., is passed to the attached PCE computer. The PCE has a complete picture of the flows and paths in the network at that precise moment, derived from other Operational Support System (OSS) programmes, so it can calculate in real time the optimal path that will deliver the requested service. This path is then used to automatically update router configurations and the Traffic Engineering Database.
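
    The sketch below is a conceptual illustration of that cycle (it is not the PCEP protocol or any vendor’s product, and the topology and numbers are invented): a request carries the constraints, the PCE consults its copy of the Traffic Engineering Database, returns an explicit route and reserves the bandwidth it has just committed.

        from collections import deque

        # Traffic Engineering Database: link -> free bandwidth in Mbit/s (invented).
        TED = {("PE1", "P1"): 400, ("P1", "P2"): 400, ("P2", "PE2"): 400,
               ("PE1", "P3"): 50,  ("P3", "PE2"): 50}

        def pce_compute(src, dst, bandwidth):
            """Return a hop list honouring the bandwidth constraint, else None."""
            adj = {}
            for (a, b), free in TED.items():
                if free >= bandwidth:
                    adj.setdefault(a, []).append(b)
            queue, seen = deque([[src]]), {src}
            while queue:
                path = queue.popleft()
                if path[-1] == dst:
                    return path
                for nxt in adj.get(path[-1], []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        def pce_request(src, dst, bandwidth):
            route = pce_compute(src, dst, bandwidth)
            if route is None:
                return {"status": "no path", "route": None}
            for link in zip(route, route[1:]):     # reserve what was just granted
                TED[link] -= bandwidth
            return {"status": "ok", "route": route}

        print(pce_request("PE1", "PE2", bandwidth=100))   # forced onto the P1/P2 route
        print(TED[("PE1", "P1")])                          # TED updated: 300 left

    The important point is the bookkeeping: because the PCE both computes the path and records the reservation, the next request sees an up-to-date picture of the network.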

    In practice, the PCE architecture calls for each Autonomous System (AS) domain to have its own PCE and if a multi-domain path is required the affected PCEs will co-operate to calculate the required path with the requirement provided by a ‘master’ PCE. The standard supports any combination, number or location of PCEs.

    Why a separate PCE?

    There are a number of reasons why a separate PCE is being proposed:

    • Path Computation of any form is not an easy and simple task by any means. Even with appropriate software, computing all the primary, back-up and service paths on a complex network will strain computing techniques to the extreme. A number of companies that provide software capable of undertaking this task were listed in the above post.
    • The PCE will need to undertake computationally intensive calculations, so it is unlikely (to me) that a PCE capability would ever be embedded in a router or switch, as they generally do not have the power to undertake path calculations in a complex network.
    • If path calculations are to be undertaken in a real-time environment then, unlike off-line software which can take hours for an answer to pop out, a PCE would need to provide an acceptable solution in just a few minutes or seconds.
    • Most MPLS routers calculate a path on the basis of a single constraint e.g. the shortest path. Calculating paths based on multiple constraints such as bandwidth, latency, cost or QoS significantly increases the computing power required to reach a solution.
    • Routers route and have limited or partial visibility of the complete network, domain and service mix and thus are not able to undertake the holistic calculations required in a modern converged network.
    • In a large network the Traffic engineering database (TED) can become very large creating a large computational overhead for a core router. Moving TED calculations to a dedicated PCE server could be beneficial in lowering path request response times.
    • In a traditional IP network there may be many legacy devices that do not have an appropriate control plane thus creating visibility ‘holes’.
    • A PCE could be used to provide alternative restorative routing of traffic in an emergency. As a PCE would have a holistic view of the network, restoration using a PCE could reduce potential knock-on effects of a reroute.

    The key aspect of multi-layer support

    One of the most interesting architectural aspects of the PCE is that it addresses a very significant issue faced by all carriers today – multi-network support. All carriers utilise multiple layers to transport traffic – these could include IP-VPN, IP, Ethernet, TDM, MPLS, SDH and optical networks in several possible combinations. The issue is that a path computation at the highest level inevitably has a knock-on effect down the hierarchy to the physical optical layer. Today, each of these layers and protocols is generally managed, planned and optimised as a separate entity, so it would make sense that when a new path is calculated, its requirements are passed down the hierarchy so that knock-on effects can be better managed. The addition of even a small new IP link could force the need to add an additional fibre.
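
    A toy example of that knock-on effect (all names and numbers invented) might look like this: the new IP demand is passed down as a bandwidth request, and if the wavelengths already lit cannot absorb it the optical layer has to light another lambda – or, at worst, plan a new fibre.

        LAMBDA_CAPACITY = 10_000            # Mbit/s per wavelength (illustrative)
        optical_link = {"lit_lambdas": 1, "spare_lambdas": 1, "used": 9_600}

        def add_ip_demand(mbps):
            capacity = optical_link["lit_lambdas"] * LAMBDA_CAPACITY
            if optical_link["used"] + mbps <= capacity:
                optical_link["used"] += mbps
                return "fits on existing wavelengths"
            if optical_link["spare_lambdas"] > 0:       # knock-on to the optical layer
                optical_link["spare_lambdas"] -= 1
                optical_link["lit_lambdas"] += 1
                optical_link["used"] += mbps
                return "new wavelength lit to carry the extra IP traffic"
            return "no spare wavelengths: new fibre (and a dig) required"

        print(add_ip_demand(300))    # fits within the existing lambda
        print(add_ip_demand(800))    # forces a second lambda to be lit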

    Clearly, providing flow-through and visibility of new services to all layers, and managing path computation on a multi-layer basis, would be a real boon for network optimisation and cost reduction. However, let’s bear in mind that this represents a nirvana solution for planning engineers!

    A Multi-layer path

    The PCE specification is being defined to provide this across-layer or multi-layer capability. Note that a PCE is not a solution aimed at use across the whole Internet – clearly that would be a step too far, along the lines of the whole Internet upgrading to IPv6!

    I will not plunge into the deep depths of the PCE architecture here, but a complete overview can be found in A Path Computation Element (PCE) Based Architecture (RFC 4655). At the highest level the PCE talks to a signalling engine that takes in requests for a new path calculation and passes any consequential requests to other PCEs that might be needed for an inter-domain path. The PCE also interacts with the Traffic Engineering Database to automatically update it if and as required (Picture source: this paper).

    Another interesting requirements document is Path Computation Element Communication Protocol (PCECP) Requirements.

    Round up

    It is very early days for the PCE project, but it would seem to provide one of the key elements required to enable carriers to effectively manage a fully converged Next Generation Network. However, I would imagine that the operational management in many carriers would be aghast at putting the control of even transient path computation on-line when considering the risk and the consequence to customer experience if it went wrong.

    Clearly the PCE architecture has to be based on powerful computing engines and software that can holistically monitor the network and calculate new paths in seconds, and, most importantly, it must be a truly resilient network element. Phew!

    Note: One of the few commercial companies working on PCE software is Aria Networks, who are based in the UK and whose CTO, Adrian Farrel, also co-chairs the PCE Working Group. I should declare an interest as I undertook some work for Aria Networks in 2006.

    Addendum #1: GMPLS and common control

    Addendum #2: Aria Networks shows the optimal path

    Addendum #3: It was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF), which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here, starting at slide 10. Does RACF compete with PCE, or could PCE be a part of RACF?

    Addendum #4: New web site focusing on PCE: http://pathcomputationelement.com