I’ve spent many a happy hour waiting in Vodafone’s Newbury HQ reception, but I’ve never seen anything like this! I hope they are insured for floods!
I first came across the embryonic idea behind The Cloud in 2001 when I met its Founder, George Polk. In those days George was the ‘Entrepreneur in Residence’ at iGabriel, an early stage VC formed in the same year.
One of his first questions was “how can I make money from a Wi-Fi hotspot business?” I certainly didn’t claim that I knew at the time but sure as eggs is eggs I guess that George, his co-founder Niall Murphy and The Cloud team are world experts by now! George often talked about environmental issues but I was sorry to hear that he had stepped down from his CEO position (he’s still on the Board) to work on climate change issues.
The vision and business model behind The Cloud is based on the not unreasonable idea that we all now live in a connected world where we use multiple devices to access the Internet. We all know what these are: PCs, notebooks, mobile phones, PDAs and games consoles etc. etc. Moreover, we want to transparently use any transport bearer that is to hand to access the Internet, no matter where we are or what we are doing. This could be DSL in the home, a LAN in the office, GPRS on a mobile phone or a Wi-Fi hotspot.
The Cloud focuses on the creation and enablement of public Wi-Fi so that consumers and business people are able to connect to the Internet wherever they may be when out and about.
One of the big issues with Wi-Fi hotspots back in the early years of the decade (and it still is, though less so these days) was that the hotspot provision industry was highly fragmented, with virtually every public hotspot being managed by a different provider. When these providers wanted to monetise their activities it seemed that you needed to set up a different account at each site you visited. This cast a big shadow over users and slowed down market growth considerably.
What was needed in the market place was Wi-Fi aggregators, or market consolidation, that would allow a roaming user to seamlessly access the Internet from lots of different hotspots without having to maintain multiple accounts.
Meeting this need for always-on connectivity is where The Cloud is focused and their aim is to enable wide-scale availability of public Wi-Fi access through four principal methods:
- Direct deployment of hotspots: (a) in coffee shops, airports, public houses etc. in partnership with the owners of these assets; (b) in wide-area locations such as city centres in partnership with local councils.
- Wi-Fi extensions of existing public fixed IP networks.
- Wi-Fi extension of existing private enterprise networks – “co-opting networks”
- Roaming relationships with other Wi-Fi operators and service providers, such as with iPass in 2006.
The Cloud’s vision is to stitch together all these assets and create a cohesive and ubiquitous Wi-Fi network to enable Internet access at any location using the most appropriate bearer available.
It’s The Cloud’s activities under (a) above that are getting much publicity at the moment as back in April the company announced coverage of the City of London in partnership with the City of London Corporation. The map below shows the extent of the network.
Note: However, The Cloud will not have everything all to itself in London as a ‘free’ Thames-based Wi-Fi network has just been launched (July 2007) by Meshhopper.
On July 18th 2007 The Cloud announced coverage of Manchester city centre as per the map below:
These network roll-outs are very ambitious and are some of the largest deployments of wide-area Wi-Fi technology in the world, so I was intrigued as to how this was achieved and what challenges were encountered during the roll-out.
Last week I talked with Niall Murphy, The Cloud’s Co-Founder and Chief Strategy Officer, to catch up with what they were up to and to find out what he could tell me about the architecture of these big Wi-Fi networks.
One of my first questions in respect of the city-centre networks was about in-building coverage as even high power GSM telephony has issues with this and Wi-Fi nodes are limited to a maximum power of 100mW.
I think I already knew the answer to this, but I wanted to see what The Cloud’s policy was. As I expected, Niall explained that “this is a challenge” and consideration of this need was not part of the objective of the deployments which are focused on providing coverage in “open public spaces“. This has to be right in my opinion as the limitation in power would make this an unachievable objective in practice.
Interestingly, Niall talked about The Cloud’s involvement in OFCOM‘s investigation to evaluate whether there would be any additional commercial benefit in allowing transmit powers greater than 100mW. However, The Cloud’s recommendation was not to increase power for two reasons:
- Higher power would create a higher level of interference over a wider area which would negate the benefits of additional power.
- Higher power would negatively impact battery life in devices.
In the end, if I remember correctly, the recommendation by OFCOM was to leave the power limits as they were.
I was interested in the architecture of the city-wide networks as I really did not know how they had gone about the challenge. I am pretty familiar with the concept of mesh networks as I tracked the path of one of the early pioneers of this technology in the UK, Radiant Networks. Unfortunately, Radiant went to the wall in 2004, for reasons I assume to be concerned with the use of highly complex, proprietary and expensive nodes (as shown on the left) and the use of the 26, 28 and 40GHz bands, which would severely impact the economics due to small cell sizes.
Fortunately, Wi-Fi is nothing like those early proprietary approaches to mesh networks and the technology has come of age due to wide-scale global deployment. More importantly, this has also led to considerably lower equipment costs. The reason for this is that Wi-Fi uses the 2.4GHz ‘free band’ and most countries around the world have standardised on the use of this band, giving Wi-Fi equipment manufacturers access to a truly global market.
Anyway, getting back to The Cloud, Niall said that “the aims behind the City of London network was to provide ubiquitous coverage in public spaces to a level of 95% which we have achieved in practice“.
The network uses 127 nodes which are located on street lights, video surveillance poles or other street furniture owned by their partner, the City of London Corporation. Are 127 nodes enough, I asked? Niall’s answer was an emphatic “yes”, although “the 150 metre cell radius and 100mW power limitation of Wi-Fi definitely provides a significant challenge“.
Interestingly, Niall observed that deploying a network in the UK is much harder than in the US due to the lower permitted power levels in the 2.4GHz band. The Cloud’s experience has shown that a cell density two or three times greater is required in a UK city – comparing London to Philadelphia, for example. This raises a lot of interesting questions about hotspot economics!
Much time was spent on hotspot planning and this was achieved in partnership with a Canadian company called Belair Networks. One of the interesting aspects of this activity was that there was “serious head scratching” by Belair as being a Canadian company they were used to nice neat square grids of streets and not the no-straight-line topology mess of London!
Data traffic from the 127 nodes that form The Cloud’s City of London network is back-hauled to seven 100Mbit/s fibre PoPs (Points of Presence) using 5.6GHz radio. Thus each node has two transceivers. The first is the Wi-Fi transceiver with a 2.4GHz antenna trained on the appropriate territory. The second is a 5.6GHz transceiver pointing to the next node, where the traffic daisy-chains back to the fibre PoP, effectively creating a true mesh network (incidentally, backhaul is one of the main uses of WiMax technology). I won’t talk about the strengths and weaknesses of mesh radio networks here but will write a post on this subject at a future date.
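To make the daisy-chaining idea concrete, here is a toy sketch (mine, not The Cloud’s design) that works out how many backhaul hops each mesh node is from its nearest fibre PoP using a simple breadth-first search; the six-node mesh and the PoP placement are invented purely for illustration.

```python
from collections import deque

def hops_to_nearest_pop(mesh, pops):
    """Breadth-first search from all fibre PoPs at once, so each node's
    distance is the number of backhaul hops to its nearest PoP."""
    hops = {pop: 0 for pop in pops}
    queue = deque(pops)
    while queue:
        node = queue.popleft()
        for neighbour in mesh[node]:
            if neighbour not in hops:
                hops[neighbour] = hops[node] + 1
                queue.append(neighbour)
    return hops

# A tiny imaginary mesh: nodes A-F, with A acting as the fibre PoP.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "E"],
    "D": ["B", "F"],
    "E": ["C"],
    "F": ["D"],
}
print(hops_to_nearest_pop(mesh, ["A"]))
# {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 2, 'F': 3}
```

In a real deployment the planning problem is the reverse: place the PoPs and nodes so that no node is too many hops (and too much shared 5.6GHz capacity) away from fibre.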
According to Niall, the tricky part of the build was to find appropriate sites for the nodes. You might think this was purely due to radio propagation issues, but there was also the issue that the physical assets they were using didn't always turn out to be where they appeared to be on the maps! “We ended up arriving at the street lamp indicated on the map and it was not there!” This is much the same as the many carriers who do not know where some of their switches are located, or how many customer leased lines they have in place.
Another interesting anecdote concerned the expectations of journalists at the launch of the network. “Because we were talking about ubiquitous coverage, many thought they could jump in a cab and watch Joost streaming video as they weaved their way around the city.“ Oh, it didn’t work then, I said to Niall, expecting him to say that they were disappointed. “No,” he said, “it absolutely worked!“
Niall says the network is up and running and working according to their expectations: “There is still a lot of tuning and optimisation to do but we are comfortable with the performance.“
Incidentally, The Cloud owns the network and works with the Corporation of London as the landlord.
The Cloud has really achieved a lot this year with the roll-out of the city centre networks and the sign-up of six to seven thousand users in London alone. This was backed up by the launch of UltraWiFi, a flat-rate service costing £11.99 per month.
Incidentally, The Cloud do not see themselves in competition with cable companies or mobile operators, concentrating as they do on providing pure Wi-Fi access to individuals on the move – although in many ways it actually does compete.
They operate in the UK, Sweden, Denmark, Norway, Germany and The Netherlands. They're also working with a wide array of service providers, including O2, Vodafone, Telenor, BT, iPass, Vonage and Nintendo amongst others.
The big challenge ahead, as I’m sure they would acknowledge, is how they are going to ramp up revenues and take their business into the big time. I am confident that they are well able to rise to this challenge and exceed it. All I know is that public Wi-Fi access is a crucial capability in this connected world and without it the Internet world will be a much less exciting and usable place.
To me, IPv6 is one of the Internet’s real enigmas as the supposed replacement for the Internet’s ubiquitous IPv4. We all know this has not happened.
The Internet Protocol (IPv4) is the principal protocol that lies behind the Internet and it originated before the Internet itself. In the late 1960s there was a need in a number of US universities to exchange data, and an interest in developing the new network technologies, switching capabilities and protocols required to achieve this.
The result of this was the formation of the Advanced Research Projects Agency (ARPA), a US government body (later renamed the Defense Advanced Research Projects Agency, DARPA), which started developing a private network called ARPANET. The initial contract to develop the network was won by Bolt, Beranek and Newman (BBN), which was eventually bought by Verizon and sold to two private equity companies in 2004 to be renamed BBN Technologies.
The early services required by the university consortium were file transfer, email and the ability to remotely log onto university computers. The first version of the protocol was called the Network Control Protocol (NCP) and saw the light of day in 1971.
In 1973, Vint Cerf, who worked on NCP (and is now Chief Internet Evangelist at Google), and Robert Kahn (who previously worked on the Interface Message Processor [IMP]) kicked off a program to design a next generation networking protocol for the ARPANET. This activity resulted in the standardisation, through ARPANET Requests For Comments (RFCs), of TCP/IPv4 in 1981 (now IETF RFC 791).
IPv4 uses a 32-bit address structure which we see most commonly written in dot-decimal notation such as aaa.bbb.ccc.ddd representing a total of 4,294,967,296 unique addresses. Not all of these are available for public use as many addresses are reserved.
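If you want to play with the numbers, Python’s standard ipaddress module makes the relationship between the dotted notation and the underlying 32-bit integer easy to see; the addresses below are just documentation and private-range examples.

```python
import ipaddress

addr = ipaddress.IPv4Address("192.0.2.1")           # aaa.bbb.ccc.ddd notation
as_int = int(addr)                                   # 3221225985
print(as_int, ipaddress.IPv4Address(as_int))         # round-trips back to 192.0.2.1
print(2 ** 32)                                       # 4294967296 possible addresses
print(ipaddress.IPv4Address("10.0.0.1").is_private)  # True - one of the reserved ranges
```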
An excellent book that pragmatically and engagingly goes through the origins of the Internet in much detail is Where Wizards Stay Up Late – it’s well worth a read.
The perceived need for upgrading
The whole aim of the development of IPv4 was to provide a schema to enable global computing by ensuring that computers could uniquely identify themselves through a common addressing scheme and communicate in a standardised way.
No matter how you look at it, IPv4 must be one of the most successful standardisation efforts to have ever taken place if measured by its success and ubiquity today. Just how many servers, routers, switches, computers, phones, and fridges are there that contain an IPv4 protocol stack? I’m not too sure, but it’s certainly a big, big number!
In the early 1990s, as the Internet really started ‘taking off’ outside of university networks, it was generally thought that the IPv4 specification was beginning to run out of steam and would not be able to cope with the scale of the Internet as the visionaries foresaw. Although there were a number of deficiencies, the prime mover for a replacement to IPv4 came from the view that the address space of 32 bits was too restrictive and would completely run out within a few years. This was foreseen because it was envisioned, probably not wrongly, that nearly every future electronic device would need its own unique IP address and if this came to fruition the addressing space of IPv4 would be woefully inadequate.
Thus the IPv6 standardisation project was born. IPv6 packaged together a number of IPv4 enhancements that would enable the IP protocol to be serviceable for the 21st century.
Work started in 1992/93 and by 1996 a number of RFCs had been released, starting with RFC 1883 (since superseded by RFC 2460). One of the most important RFCs to be released was RFC 1933, which specifically looked at the transition mechanisms for converting IPv4 networks to IPv6. This covered the ability of routers to run IPv4 and IPv6 stacks concurrently – “dual stack” – and the pragmatic ability to tunnel the IPv6 protocol over ‘legacy’ IPv4-based networks such as the Internet.
To quote RFC 1933:
This document specifies IPv4 compatibility mechanisms that can be implemented by IPv6 hosts and routers. These mechanisms include providing complete implementations of both versions of the Internet Protocol (IPv4 and IPv6), and tunnelling IPv6 packets over IPv4 routing infrastructures. They are designed to allow IPv6 nodes to maintain complete compatibility with IPv4, which should greatly simplify the deployment of IPv6 in the Internet, and facilitate the eventual transition of the entire Internet to IPv6.
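As a flavour of what ‘dual stack’ means in practice, here is a minimal sketch using Python’s standard socket module: a single IPv6 listening socket that will also accept IPv4 callers, which show up as ‘IPv4-mapped’ addresses. This assumes the host operating system supports both stacks and allows the option to be cleared; it is an illustration of the idea, not a recipe from the RFC.

```python
import socket

# One listening socket that serves both protocol families ("dual stack").
# Clearing IPV6_V6ONLY lets IPv4 clients appear as ::ffff:a.b.c.d mapped
# addresses alongside native IPv6 clients (OS support assumed).
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", 8080))
sock.listen(5)

conn, (host, port, *_) = sock.accept()
print("client address:", host)   # e.g. "::ffff:192.0.2.7" for an IPv4 caller
```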
The IPv6 specification contained a number of areas of enhancement:
Address space: Back in the early 1990s there was a great deal of concern about the lack of availability of public IP addresses. With the widespread uptake of IP rather than ATM as the basis of enterprise private networks, as discussed in a previous post The demise of ATM, most enterprises had gone ahead and implemented their networks with any old IP address they cared to use. This didn’t matter at the time because those networks were not connected to the public Internet, so it didn’t matter whether other computers or routers had selected the same addresses.
It first became a serious problem when two divisions of a company tried to interconnect within their private network and found that both divisions had selected the same default IP addresses and could not connect. This was further compounded when those companies wanted to connect to the Internet and found that their privately selected IP addresses could not be used in the public space as they had been allocated to other companies.
The answer to this problem was to increase the IP protocol addressing space to accommodate all the private networks coming onto the public network. Combined with the vision that every electronic device could contain an IP stack, IPv6 increased the address space to 128 bits rather than IPv4's 32 bits.
Headers: Headers in IPv4 (headers precede data in the packet flow and contain routing and other information about the data) were already becoming unwieldy, and the much longer addresses needed by IPv6 would not help, growing the minimum 20-byte IPv4 header to a fixed 40-byte IPv6 header. IPv6 compensates by simplifying things: optional information is moved into extension headers that are chained together and only used when needed, leaving a base header of just eight fields with no options field.
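For the curious, here is a rough sketch of what that fixed 40-byte header looks like when packed up in Python; the field values and addresses are invented and extension headers are ignored.

```python
import ipaddress
import struct

def ipv6_header(payload_len, next_header, hop_limit, src, dst,
                traffic_class=0, flow_label=0):
    """Pack the fixed 40-byte IPv6 base header (values here are illustrative)."""
    first_word = (6 << 28) | (traffic_class << 20) | flow_label  # version = 6
    return (struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit)
            + ipaddress.IPv6Address(src).packed
            + ipaddress.IPv6Address(dst).packed)

hdr = ipv6_header(1280, 6, 64, "2001:db8::1", "2001:db8::2")  # next header 6 = TCP
print(len(hdr))   # 40 - fixed size, options live in chained extension headers
```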
Configuration: Managing an IP network is pretty much a manual exercise with few tools to automate the activity beyond the likes of DHCP (the automatic allocation of IP addresses to computers). Network administrators seem to spend most of the day manually entering IP addresses into fields in network management interfaces, which really does not make much use of their skills.
IPv6 has incorporated enhancements to enable a ‘fully automatic’ mode where the protocol can assign an address to itself without human intervention. The IPv6 protocol will send out a request to enquire whether any other device has the same address. If it receives a positive reply it will add a random offset and ask again until it receives no reply. IPv6 can also identify nearby routers and automatically detect whether a local DHCP server is available.
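The real mechanism is Duplicate Address Detection using Neighbour Solicitation messages, but a toy model of the “ask, and nudge by a random offset if someone answers” behaviour described above might look like this (entirely illustrative):

```python
import random

def pick_address(addresses_in_use, base=0x1):
    """Toy model of IPv6 address self-assignment: try an interface ID, ask
    the link whether it is taken (Duplicate Address Detection in reality),
    and nudge it by a random offset until nobody answers."""
    candidate = base
    while candidate in addresses_in_use:          # a "positive reply" came back
        candidate += random.randint(1, 0xFFFF)    # add a random offset, try again
    return f"fe80::{candidate:x}"                 # link-local prefix + interface ID

in_use = {0x1, 0x2}
print(pick_address(in_use))   # e.g. fe80::5a31 - an address nobody replied to
```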
Quality of Service: IPv6 has embedded enhancements to enable the prioritisation of certain classes of traffic by assigning a value to a packet in the Traffic Class field (originally named Priority).
Security: IPv6 incorporates IPsec to provide authentication and encryption, improving the security of packet transmission; encryption is handled by the Encapsulating Security Payload (ESP).
Multicast: Multicast addresses are group addresses so that packets can be sent to a group rather than an individual. IPv4 handles this very inefficiently while IPv6 has implemented the concept of a multicast address into its core.
So why aren’t we all using IPv6?
The short answer to this question is that IPv4 is a victim of its own success. The task of migrating the Internet to IPv6, even taking into account the available migration options of dual-stack hosting and tunnelling, is just too challenging.
As we all know, the Internet is made up of thousands of independently managed networks, each looking to commercially thrive or often just to survive. There is no body overseeing how the Internet is run except for specific technical aspects such as Domain Name System (DNS) management or the standards body, the IETF. (Picture credit: The logo of the Linux IPv6 Development Project)
No matter how much individual evangelists push for the upgrade, getting the world to do so is pretty much an impossible task unless everyone sees that there is a distinct commercial and technical benefit for them to do so.
This is the core issue, and the benefits of upgrading to IPv6 have been seriously eroded by the advent of other standards efforts that address each of the IPv6 enhancements on a stand-alone basis. The two principal ones are NAT and MPLS.
Network address translation (NAT): To overcome the limitation in the number of available public addresses, NAT was implemented. This means that many users or computers in a private network are able to access the public Internet using a single public IP address. Each user is assigned a transient, dynamic session IP address when they access the Internet and the NAT software manages the translation between the public IP address and the dynamic address used within the private network.
NAT effectively addressed the concern that the Internet might run out of address space. It could be argued that NAT is just a short-term solution that came at a big cost to users. The principal downside is that external connections are unable to set up long-term relationships with an individual user or computer that is behind a NAT wall, as they have not been assigned their own unique IP address. The internal, dynamically assigned IP addresses can change at any time.
This particularly affects applications that contain addresses so that traffic can always be sent to a specific individual or computer – VoIP is probably the main victim.
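A toy model of what a NAT (strictly speaking NAPT) gateway keeps in its translation table shows why inbound connections are the casualty – if no outbound mapping already exists, there is nowhere to send the reply. This is a simplified sketch of the idea, not any particular vendor’s implementation.

```python
import itertools

class NaptGateway:
    """Toy NAT (strictly NAPT): many private hosts share one public address,
    distinguished by the public-side port the gateway hands out."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(49152)   # start of the ephemeral port range
        self.table = {}                       # public port -> (private ip, port)

    def outbound(self, private_ip, private_port):
        public_port = next(self.ports)
        self.table[public_port] = (private_ip, private_port)
        return self.public_ip, public_port

    def inbound(self, public_port):
        # Replies are only deliverable if an outbound mapping already exists -
        # exactly why servers behind NAT are hard to reach from the outside.
        return self.table.get(public_port)

gw = NaptGateway("203.0.113.5")
print(gw.outbound("192.168.1.10", 51000))   # ('203.0.113.5', 49152)
print(gw.inbound(49152))                    # ('192.168.1.10', 51000)
print(gw.inbound(49153))                    # None - no mapping, packet dropped
```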
It’s interesting to note that the capability to uniquely identify individual computers was the main principle behind the development of IPv4, so it is quite easy to see why strong views are often expressed about NAT!
MPLS and related QoS standards: The advent of MPLS covered in The rise and maturity of MPLS and MPLS and the limitations of the Internet addressed many of the needs of the IP community to be able to address Quality of Service issues by separating high-priority service traffic from low-priority traffic.
Don’t break what works: IP networks take a considerable amount of skill and hard work to keep alive. They always seem to be ‘living on the edge’ and break down when a network administrator gets distracted. Leave well alone is the mantra of many operational groups.
The benefits of upgrading to IPv6 have been considerably eroded by the advent of NAT and MPLS. Combine this with the lack of an overall management body who could force through a universal upgrade and the innate inertia of carriers and ISPs probably means that IPv6 will never achieve such a dominant position as its progenitor IPv4.
According to one overview of IPv6, which gets to the heart of the subject, “Although IPv6 is taking its sweet time to conquer the world, it’s now showing up in more and more places, so you may actually run into it one of these days.”
This is not to say that IPv6 is dead, rather it is being marginalised by only being run in closed networks (albeit some rather large networks). There is real benefit to the Internet being upgraded to IPv6 as every individual and every device connected to it could be assigned its own unique address as envisioned by the Founders of the Internet. The inability to do this severely constrains services and applications which are not able to clearly identify an individual on an on-going basis as is inherent in a telephone number. This clearly reflects badly on the Internet.
IPv6 is a victim of the success of the Internet and the ubiquity of IPv4 and will probably never replace IPv4 in the Internet in the foreseeable future (Maybe I should never say never!). I was once asked by a Cisco Fellow how IPv6 could be rolled out, after shrugging my shoulders and laughing I suggested that it needed a Bill Gates of the Internet to force through the change. That suggestion did not go down too well. Funnily enough, now that IPv6 is incorporated into Vista we could see the day when this happens. The only fly in the ointment is that Vista has the same problems and challenges as IPv6 in replacing XP – users are finally tiring of never-ending upgrades with little practical benefit.
I’m really sorry to hear that the mobile software company the Tao Group has gone into administration, according to The Register. The company has a history going back over ten years and I recently wrote about a speech made by their Chairman, Francis Charig – Mobile apps: Java just doesn’t cut the mustard?
I will try and find out more details.
A sad day for all of Tao’s employees I’m sure and a sad day for the UK software industry.
Session Initiation Protocol, or ‘SIP’ as it is known, has become a major signalling protocol in the IP world as it lies at the heart of Voice-over-IP (VoIP). It’s a term you can hardly miss as it is supported by every vendor of phones on the planet (Picture credit: Avaya: An Avaya SIP phone).
Many open software groups have taken SIP to the heart of their initiatives and an example of this is IP Multimedia Subsystem (IMS) which I recently touched upon in IP Multimedia Subsystem or bust!
SIP is a real-time IP applications layer protocol that sits alongside HTTP, FTP, RTP and other well known protocols used to move data through the Internet. However it is an extremely important one because it enables SIP devices to discover, negotiate, connect and establish communication sessions with other SIP enabled devices.
SIP was co-authored in 1996 by Jonathan Rosenberg, who is now a Cisco Fellow, Henning Schulzrinne, who is Professor and Chair in the Dept. of Computer Science at Columbia University, and Mark Handley, who is Professor of Networked Systems at UCL. SIP then became the responsibility of an IETF SIP Working Group, which still maintains the RFC 3261 standard. SIP was originally used on the US experimental multicast network commonly known as Mbone. This makes SIP an IT/IP standard rather than one developed by the communications industry.
Prior to SIP, voice signalling protocols were essentially proprietary protocols aimed at use by the big telecommunications companies on their big Public Switched Telephone Network (PSTN) voice networks, such as SS7 (C7 in the UK). With the advent of the Internet and the ‘invention’ of Voice over IP, it soon became clear that a new signalling protocol was required that was peer-to-peer, scalable, open, extensible, lightweight and simple in operation, and that could be used on a whole new generation of real-time communications devices and services running over the Internet.
SIP itself is based on earlier IETF / Internet standards, principally Hypertext Transport Protocol (HTTP) which is the core protocol behind the World Wide Web.
Key features of SIP
The SIP signalling standard has many key features:
Communications device identification: SIP supports a concept known as Address of Record (AOR) which represents a user’s unique address in the world of SIP communications. An example of an AOR is sip: email@example.com. To enable a user to have multiple communications devices or services, SIP has a mechanism called a Uniform Resource Identifier (URI). A URI is like the Uniform Resource Locator (URL) used to identify servers on the world wide web. URIs can be used to specify the destination device of a real-time session e.g.
- IM: sip: firstname.lastname@example.org (Windows Messenger uses SIP)
- Phone: sip: 1234 1234 email@example.com; user=phone
- FAX: sip: 1234 1234 firstname.lastname@example.org; user=fax
A SIP URI can use both traditional PSTN numbering schemes AND alphabetic schemes as used on the Internet.
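By way of illustration, here is a very small (and deliberately naive) SIP URI splitter; real parsing is defined by RFC 3261 and handles far more cases, so treat the helper function and the example address as purely hypothetical.

```python
def parse_sip_uri(uri):
    """Very small SIP URI splitter - illustrative only, not RFC 3261 parsing."""
    scheme, _, rest = uri.partition(":")
    userinfo, _, rest = rest.partition("@")
    host, _, params = rest.partition(";")
    return {
        "scheme": scheme,
        "user": userinfo,
        "host": host,
        "params": dict(p.strip().split("=", 1) for p in params.split(";") if "=" in p),
    }

print(parse_sip_uri("sip:alice@example.com;user=phone"))
# {'scheme': 'sip', 'user': 'alice', 'host': 'example.com', 'params': {'user': 'phone'}}
```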
Focussed function: SIP only manages the set up and tear down of real time communication sessions, it does not manage the actual transport of media data. Other protocols undertake this task.
Presence support: SIP is used in a variety of applications but has found a strong home in applications such as VoIP and Instant Messaging (IM). What makes SIP interesting is that it is not only capable of setting up and tearing down real-time communications sessions but also supports and tracks a user’s availability through the Presence capability. (The open Jabber standard tackles presence with its own protocol rather than SIP.) I wrote about presence in – The magic of ‘presence’.
Presence is supported through a key SIP extension: SIP for Instant messaging and Presence Leveraging Extensions (SIMPLE) [a really contrived acronym!]. This allows a user to state their status as seen in most of the common IM systems. AOL Instant Messenger is shown in the picture on the left.
SIMPLE means that the concept of Presence can be used transparently on other communications devices such as mobile phones, SIP phones, email clients and PBX systems.
User preference: SIP user preference functionality enables a user to control how a call is handled in accordance to their preferences. For example:
- Time of day: A user can take all calls during office hours but direct them to a voice mail box in the evenings.
- Buddy lists: Give priority to certain individuals according to a status associated with each contact in an address book.
- Multi-device management: Determine which device / service is used to respond to a call from particular individuals.
PSTN mapping: SIP can manage the translation or mapping of conventional PSTN numbers to SIP URIs and vice versa. This capability allows SIP sessions to transparently inter-work with the PSTN. There are mechanisms, such as ENUM, that provide the appropriate database capabilities. To quote ENUM’s home page:
“ENUM unifies traditional telephony and next-generation IP networks, and provides a critical framework for mapping and processing diverse network addresses. It transforms the telephone number—the most basic and commonly-used communications address—into a universal identifier that can be used across many different devices and applications (voice, fax, mobile, email, text messaging, location-based services and the Internet).”
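The mapping itself is pleasingly simple: take the E.164 number, reverse the digits, separate them with dots and hang the result under e164.arpa, then look up NAPTR records there (RFC 3761). A small sketch, using a fictitious UK number, shows the idea:

```python
def enum_domain(e164_number):
    """Map an E.164 phone number to the domain queried for ENUM NAPTR records:
    digits reversed, dot-separated, under e164.arpa."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+44 20 7946 0123"))
# 3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa - a NAPTR lookup here can return a SIP URI
```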
SIP trunking: SIP trunks enable enterprises to group inter-site calls onto a pure IP network. This could use an IP-VPN over an MPLS-based network with a guaranteed Quality of Service. Using SIP trunks could lead to significant cost savings when compared to using traditional E1 or T1 leased lines.
Inter-island communications: In a recent post, Islands of communication or isolation? I wrote about the challenges of communication between islands of standards or users. The adoption of SIP-based services could enable a degree of integration with other companies to extend the reach of what, to date, have been internal services.
Of course, the partner companies need to have adopted SIP as well and have appropriate security measures in place. This is where the challenge would lie in achieving this level of open communications! (Picture credit: Zultys: a Wi-Fi SIP phone)
SIP servers are the centralised capability that manages the establishment of communications sessions by users. Although there are many types of server, they are essentially just software processes and could all be run on a single processor or device. There are several types of SIP server:
Registrar Server: The registrar server authenticates and registers users as soon as they come on-line. It stores identities and the list of devices in use by each user.
Location Server: The location server keeps track of users’ locations as they roam and provides this data to other SIP servers as required.
Redirect Server: When users are roaming, the Redirect Server maps session requests to a server closer to the user or an alternate device.
Proxy Server: SIP Proxy servers pass on SIP requests that are located either downstream or upstream.
Presence Server: SIP presence servers enable users to provide their status (presentities) to other users who would like to see it (Watchers).
Call setup Flow
The diagram below shows the initiation of a call from the PSTN network (section A), connection (section B) and disconnect (section C). The flow is quite easy to understand. One of the downsides is that if a complex session is being set up it is possible to rack up 40 to 50+ separate transactions, which could lead to unacceptable set-up times being experienced – especially if the SIP session is being negotiated across the best-effort Internet.
(Picture source: NMS Communications)
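To give a flavour of what travels in those transactions, here is a sketch that assembles a bare-bones SIP INVITE; the users, hosts and helper function are made up, and a real request would also carry an SDP body describing the media session.

```python
import uuid

def build_invite(caller, callee, via_host):
    """Assemble a minimal SIP INVITE start line and headers (illustrative only)."""
    return "\r\n".join([
        f"INVITE {callee} SIP/2.0",
        f"Via: SIP/2.0/UDP {via_host};branch=z9hG4bK{uuid.uuid4().hex[:8]}",
        "Max-Forwards: 70",
        f"From: <{caller}>;tag={uuid.uuid4().hex[:8]}",
        f"To: <{callee}>",
        f"Call-ID: {uuid.uuid4()}@{via_host}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
        "", "",
    ])

print(build_invite("sip:alice@example.com", "sip:bob@example.org", "client.example.com"))
```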
As a standard, SIP has had a profound impact on our daily lives and sits comfortably alongside those other protocol acronyms that have fallen into the daily vernacular, such as IP, HTTP, www and TCP. Protocols that operate at the application level seem to be so much more relevant to our daily lives than those that are buried in the network, such as MPLS and ATM.
There is still much to achieve by building capability on top of SIP such as federated services and more importantly interoperability. Bodies working on interoperability are SIPcenter, SIP Forum, SIPfoundry, SIP’it and IETF’s SPEERMINT working group. More fundamental areas under evaluation are authentication and billing.
More in-depth information about SIP can be found at http://www.tech-invite.com, a portal devoted to SIP and surrounding technologies.
Next time you buy a SIP Wi-Fi phone from your local shop, install it, and find that it works first time AND saves you money, just think about all the work that has gone into creating this software wonder. Sometimes, standards and open software hit a home run. SIP is just that.
Addendum #1: Do you know your ENUM?
I have never felt so uncomfortable about writing about a subject as I am now while contemplating IP Multimedia Subsystem (IMS). Why this should be I’m not quite sure.
Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:
“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”
This was an initiative to bring computers and software to the world of voice switches that would enable carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:
“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”
Now there was absolutely nothing wrong with that vision and the core technology was relatively straightforward (database look-up number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death, it just fizzled out. It didn’t really disappear as such, as most of the IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.
Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus Intelligent networks goes right back to the early 1990s and is still raging today – well at least simmering.
Put bluntly, carriers laudably want intelligence to be based in the network so they are able to provide, manage and control applications and derive revenue that will compensate for plummeting Plain Old Telephony Services (POTS) services. Whereas most IT and Internet people do not share this vision as they believe it holds back service innovation which generally comes from small companies. There is a certain amount of truth in this view as there are clear examples of where this is happening today if we look at the fixed and mobile industries.
Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?
What is IP Multimedia Subsystem (IMS)?
IMS is an architectural framework or reference architecture – not a standard – that provides a common method for IP multiple media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world – and the communications world come to that – a good part of this activity could be encompassed by the term middleware. Middleware is an interface (abstraction) layer that sits between the networks and the applications/services, providing a common Application Programming Interface (API).
The commercial justification of IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.
The technical vision of IMS is about delivering seamless services where customers are able to access any type of service, from any device they want to use, with single sign-on, with common contacts and fluidity between wire line and wireless services. IMS has ambitions about delivering:
- Common user interfaces for any service
- Open application server architecture to enable a ‘rich’ service set
- Separate user data from services for cross service access
- Standardised session control
- Inherent service mobility
- Network independence
- Inter-working with legacy IN applications
One of the comments I came across on the Internet from a major telecomms equipment vendor was that IMS was about the “Need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far as innovative services such as those mentioned generally do not come out of the carrier world.
Traditionally, each application or service offered by carriers sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols, and running in complete isolation from other services, each of which sits in its own silo. In many ways this reflects the same situation that provided the motivation to develop a common control plane for data services called GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.
Removal of service silos
Source: Business Communications Review, May 2006
As with GMPLS, most large equipment vendors are committed to IMS and supply IMS compliant products. As stated in the above article:
“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make ‘anything over IP and IP over everything’ a reality.”
Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 90s – albeit being a trifle more complex. This is the view of Bob Bellman, in an article entitled From Softswitching To IMS: Are We There Yet? Many of the core elements of a softswitch architecture are to be found in the IMS architecture including the separation of the control and data planes.
VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006
Another associated reference architecture that is aligned with IMS, and is being widely pushed by software and equipment vendors in the enterprise world, is Service Oriented Architecture (SOA), an architecture that focuses on services as the core design principle.
IMS has been developed by an industry consortium and originated in the mobile world in an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP, with 3GPP2 and TISPAN extending it to the CDMA and fixed-network worlds. Nowadays, just about every standards body seems to be involved, including the Open Mobile Alliance, ANSI, ITU, IETF, Parlay Group and Liberty Alliance – fourteen in total.
Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (three, four and five letter acronyms) which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:
#1 Applications layer:
The application layer contains Application Servers (AS) which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). Like GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS) – or an HSS in a 3rd party network if the user is roaming (in GSM this is called the Home Location Register, HLR).
(Source: Lucent Technologies)
The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.
Breakout Gateways provide routing information based on telephone number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.
PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.
#2 Control layer:
The control plane hosts the HSS which is the master database of user identities and the individual calls or service sessions currently being used by each user. There are several roles that a SIP call / session controller can undertake:
- P-CSCF (Proxy-CSCF): This provides similar functionality to a proxy server in an intranet
- S-CSCF (Serving-CSCF): This is the core SIP server, always located in the user's home network
- I-CSCF (Interrogating-CSCF): This is a SIP server located at a network's edge and its address can be found in DNS servers by 3rd party SIP servers.
#3 Transport layer:
IMS encompasses any service that uses IP/MPLS as transport and pretty much all of the fixed and mobile access technologies, including ADSL, cable modem DOCSIS, Ethernet, Wi-Fi, WiMAX and CDMA wireless. It has little choice in this matter as, if IMS is to be used, it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators – IMS is also focusing on the use of IPv6, with IPv4 ‘only’ being supported in the near term.
IMS represents a tremendous amount of work spread over six years and uses as many existing standards as possible such as SIP and Parlay. IMS is work in progress and much still needs to be done – security and seamless inter-working of services are but two.
All the major telecommunications software, middleware and integration companies are involved, and just thinking about the scale of the task needed to put in place common control for a whole raft of services makes me wonder just how practical the implementation of IMS actually is. Don’t get me wrong, I am a real supporter of these initiatives because it is hard to come up with an alternative vision that makes sense, but boy I’m glad that I’m not in charge of a carrier IMS project!
The upsides of using IMS in the long term are pretty clear and focus around lowering costs, quicker time to market, integration of services and, hopefully, single log-in.
It’s some of the downsides that particularly concern me:
- Non-migration of existing services: Like we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure such as instant conferencing, messaging, gaming, personal information management, presence, location based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services in the short term onto a brand new infrastructure – especially when that infrastructure is based on a non-completed reference architecture?
IMS is a long-term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future that can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever will.
- Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This is a quote from Keith Nissen of InStat in a BCR article.
- No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture is incomplete, services will continue to be rolled out without IMS, further inflaming the non-migration issue raised above.
- Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused getting IMS integration sorted they should be working on externally focused services. Without these services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS, Intelligent Networks reborn to suffer the same fate?
- OSS integration: Any IMS system will need to integrate with carriers’ often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
- Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That role lies with small companies on the Internet creating Web 2.0 services that utilise technologies such as presence, VoIP and AJAX today. Will any of these companies care whether a carrier has an IMS infrastructure in place?
- Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
- Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.
I said at the beginning of the post that I felt uncomfortable about writing about IMS and now that I’ve finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day and whether it diverts effort, money and limited resources away from where they should be applied – on creating interesting services and gaining market share. Only time will tell.
Addendum: In a previous post, I wrote about the IETF’s Path Computation Element Working Group and it was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF) which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here starting at slide 10. Does RACF compete with PCE or could PCE be a part of RACF?
Azea Networks, upgrading submarine cables.
For some reason I’ve always been intrigued by Azea Networks. I’m not sure why, maybe it’s because of the market area they are involved with – submarine cable upgrades – or maybe it’s because they have a way of doing this at lower cost than the jaw-dropping amount of money it takes to lay a new cable.
My interest was reawakened when I saw that Azea had recently attracted a $20m investment – Subsea network provider Azea secures $20M in more funding – so I thought I would take the opportunity to write a post about them and catch up with Scott White, Azea’s Founder and CEO.
I think most of us have heard of submarine telecommunications cables as they are such a fundamental part of modern voice and data telecommunications. There are multiple cables straddling the world’s oceans and if you would like to see the extent of the network then you can buy a definitive map from TeleGeography which shows the locations of over 120 cables.
(Picture credit: C&W) The first cables were laid in the 19th century to interconnect telegraphic networks around the world following the invention of the telegraph in 1837. What followed was a ceaseless decade-by-decade addition of new cables and their replacement as new services demanded better technology and higher bandwidths. This continues to the current day.
Early cables were electrical in nature but with the advent of optical fibres they soon made the transition to optical technology. Initially they could only support a single channel, but with the development of Wavelength Division Multiplexing (WDM) technologies – covered in Making SDH, DWDM and packet friendly – and optical amplifiers, the capacities available on submarine cables exploded to multiple 10Gbit/s channels in the late 1990s. This was driven by the then prevalent forecasts that the requirement for bandwidth would soar into the stratosphere over the next decade. Around $12 billion was spent by existing telecommunications companies and a raft of start-ups financed by the venture capital community, all hoping to make a significant return on their investments. All this new money fed significant research and development that really did move the state of the art from WDM through to Dense WDM (DWDM).
As a graph from Azea shows above, capacities on single submarine cables increased from 5Gbit/s through to Nx10Gbit/s over a number of technology generations delivered over a period of a decade.
And then came the melt down…
When the telecommunications market crashed in 2001, deployment of new cables was severely curtailed as there was over-capacity in the market, leading to plummeting wholesale prices. Many of the new market entrants sadly went into receivership, many investors were burnt and the industry shrank to little more than keeping the lights on.
The renewed need for capacity
There is a clear analogy here with the co-location industry as covered in Colo crisis for the UK Internet and IT industry? which at the same time crashed in a similar way. The irony in both cases is that a five year period of non-investment has led to a market situation where there is severe under-capacity. All the sky-high forecasts that drove the ‘bubble’ are now coming to fruition leading to my opinion that the downturn will be seen retrospectively as a blip in the continuing growth of global data traffic.
In the same way that under-capacity is driving new commercial opportunities in the colo market, it’s also leading to new opportunities in the submarine cable market. This is where Azea comes into the picture.
Laying new submarine cables is a phenomenally expensive undertaking, so expensive that cables were usually financed and owned by a consortium of interested carriers. These days carriers are reluctant to invest in such expensive projects associated with such a long pay-back period. Interestingly, most telecommunications infrastructure investments used to be written off over 25-year periods, but that has pretty much ceased over the last few years, with investors and shareholders chasing returns over much shorter periods such as three to five years.
In the last few years, much R&D investment in new cable technologies has slowed, particularly in the area of 40Gbit/s+ technologies. Strong competition for the few new cable-laying opportunities has led to low prices on 10Gbit/s technologies.
When existing capacity starts to run out on a particular submarine path there are several ways that this issue could be addressed.
- Leave alone: In other words, just ignore the problem and find a work around by shipping traffic on other paths. This could create long-term problems.
- Build a new system: Very expensive, long lead time to deployment and a long-term commitment. Not an investment profile liked by 21st century carrier Boards and shareholders.
- Lease capacity: If capacity is available – and it’s a big ‘if’ – then it could be made available quickly, but it comes at high cost in these days of under-supply.
- Upgrade capacity: This is a viable alternative that could provide capacity quite quickly at ‘modest’ cost.
It is in this last alternative that Azea Networks plays, by providing a methodology and a technology that can be used to hot-upgrade an existing cable without the need to take it off-line.
There is one caveat on this ability: only submarine cables that use optical amplifiers can be upgraded. Those using regenerators cannot. The reason for this is that a regenerator converts the modulated light streams back to electrical signals prior to regenerating them on the next segment of cable, so the channels and data rates are hard-embedded in the system and cannot be changed.
Dark fibre: As with terrestrial optical systems, lighting a dark fibre or unused fibre to add new capacity is a straightforward exercise with little impact on other lit fibres. Unfortunately, there are not too many dark fibres on submarine cables!
Overlay upgrade: An overlay upgrade can be used in parallel with existing services and works by inserting new wavelengths using an optical coupler that run together with existing WDM channels. Care is needed to avoid disrupting existing traffic. This approach does represent a compromise as it would not achieve the capacity that could be obtained by replacing WDM equipment in totality, but it does represent a cost effective solution.
Retrofit upgrade: A hybrid upgrade uses a mix of optical couplers and a replacement of equipment. This is more challenging as it could affect existing equipment warranties. It is most common for older-generation systems where the existing terminal is obsolete and uses the available spectrum inefficiently.
When upgrading a live cable, a high level of planning is required beforehand if disruption to existing services is to be avoided. Each cable incorporates a number of optical amplifiers as shown in the picture above. Each amplifier introduces noise, which accumulates along the cable. Also, each amplifier and fibre segment introduces distortion, which again accumulates along the cable length. These effects limit the data rate that can be achieved on a particular cable and the number, if any, of new channels that can be inserted.
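As a back-of-envelope illustration of why the noise accumulation matters (my own toy numbers, not Azea’s engineering), amplifier noise adds roughly linearly with the number of amplifiers, so the optical signal-to-noise ratio falls by about 10·log10(N) across a chain of N amplifiers:

```python
import math

def osnr_after_amplifiers(launch_osnr_db, n_amplifiers):
    """Rough illustration: amplifier (ASE) noise adds roughly linearly with the
    number of amplifiers, so OSNR falls by about 10*log10(N). Real cable budgets
    are far more involved (gain tilt, nonlinearity, dispersion and so on)."""
    return launch_osnr_db - 10 * math.log10(n_amplifiers)

for n in (1, 10, 50, 150):   # a long trans-oceanic route can have 100+ amplifiers
    print(n, round(osnr_after_amplifiers(35.0, n), 1))
# 1 35.0 / 10 25.0 / 50 18.0 / 150 13.2
```

The lower the OSNR left at the far end, the less headroom there is for squeezing in extra channels or higher data rates, which is exactly what the upgrade planning exercise has to quantify.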
The upgrade process consists of the following steps:
- Add a coupler, check existing traffic and measure the existing performance.
- Connect the new equipment.
- Check existing and new traffic.
In 2006 Azea announced that it had successfully completed Phase 1 of an upgrade to Segment I of the Southern Cross Cable Network (SCCN) and had been selected for Phase 2 of the upgrade. Since then they have completed additional projects and are working on others as I write.
According to Scott White, Azea’s CEO, “Although there is only one customer in the public domain, we now have a deployment track record that covers all the upgrade scenarios across different generations of cable system technology and different upgrade methodologies. We are confident that on any given cable system we can offer the best upgrade solution – the most capacity, at the lowest price, deployed in the shortest time. In addition, because we are 100% focussed on upgrades we are often seen as being more independent than the larger submarine system suppliers who also push the more expensive option of building entirely new systems. However, we do not expect to win all upgrade opportunities as there is sometimes a strong allegiance to a particular vendor in some carriers.”
Scott confirmed my view that there has been a “huge pick up in industry optimism, especially in the submarine communications sector with a big uptake in both upgrades and new system builds”. Moreover, “as a small company, the recent investment really helps provide the financial credibility we require to deal with some of the world’s biggest carriers and we now have sufficient financial resources to see us through to profitability”.
Submarine cables are a key element in the infrastructure that lies behind the Internet and private Wide Area Networks (WANs) and it’s great to hear that the industry is recovering and beginning to deploy the bandwidth capacities we all need in our day to day use of networks!