WAP, GPRS, HSDPA on the move!

September 4, 2007

Over the last few months I have written many posts about Internet technologies but they have been pretty much focussed on terrestrial rather than wireless networks (other than dabbling in Wi-Fi with my overview of The Cloud – The Cloud hotspotting the planet). This exercise was rather interesting as I needed to go back to the beginning and look at how the technologies evolved, starting with The demise of ATM.

Back in 1994 a colleague of mine, Gavin Thomas, wrote about Mobile Data protocols and it’s interesting to glance back to see how ‘crude’ mobile data services were at the time. Of course, you would expect that to be the case as GSM Digital Cellular Radio was a pretty new concept back then as well. That post ended with the statement that “GSM has a bright future”. Maybe it should have read “the future is Orange”! No one foresaw in those days the coming explosive growth of GSM and mobile phone usage. Certainly no one predicted the surge in use of SMS.

Acronym hell has extended itself to mobile services over the last few years and the market has become littered with three, four and even five letter acronyms. In particular, wireless Internet started with a three letter acronym back in the late 1990s – WAP (Wireless Application Protocol), progressed through the four letter acronyms GPRS (General Packet Radio Service) and EDGE (Enhanced Data rates for GSM Evolution), and is now moving to a five letter broadband 3G acronym – HSDPA (High-Speed Downlink Packet Access). Phew!

The history of mobile data services has been littered with hype that failed to deliver, and some of that hype lives on today. However, it did drive the development of services that really do work, unlike some of the early initiatives such as WAP.

Ah, WAP, now that was interesting. I would probably put this at the top of my list of over-hyped protocols of all time. At least when ATM was hyped this only took place within the telecommunications community, whereas WAP was hyped to the world’s consumers, which created much more visible ‘egg on the face’ for mobile operators and manufacturers.

So what was WAP?

In the late 1990s the world was agog with the Internet which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity (whether it was right or wrong) to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around the Internet as the mobile industry has never seen the Internet in the same light as PC users – more on this later.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day – Mobile Internet. Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates achievable over the air were very low, so the WAP content standards had to take this into account.

There were several aspects that needed standardising under the WAP banner:

  • Transmission protocols: WAP defined how packets were handled on a 2G wireless network. It consisted of wireless versions of TCP and UDP as seen on the Internet, together with WTP (Wireless Transaction Protocol) to control communications between the mobile phone and the base station. WTP itself contained an error correction capability to help cope with the unreliable wireless bearer.
  • Mobile HTML: It was immediately recognised that, due to the limited screen size and the low data rates achievable on a mobile phone, a very simplified version of HTML was required for use with mobile web sites. This led to the development of WML (Wireless Markup Language). This was a VERY cut-down version of HTML with very little capability, and any graphics used had to be tiny as well. Later, WAP 2.0 was defined, which improved things somewhat and was based on a cut-down version of XHTML.

WAP clearly did not live up to its promise of a mobile version of the Internet, with its crude and constrained user interface, high latency, the need to struggle with arcane menu structures (has anything changed here in ten years?) and the exceedingly slow data rates experienced on the mobile networks of the day.

However, this did not stop mobile service operators from over-hyping WAP services with endless hoardings and TV adverts extolling Internet access from mobiles. At one time it looked as if mobile operator advertising departments never talked to their engineering departments and were living in a world of their own that bore little relation to reality.

It all had to crash and it did, along with the ‘Internet bubble’ in 2001. Many mobile operators sold their WAP service as an ‘open’ service similar to the Internet. In reality, they were walled-garden services that forced users to visit the operator’s portal as their first port of call, making it well nigh impossible for small application developers to get their services in front of users. One could ask how much this has changed by 2007.

I should not forget to mention that the cost of using WAP services was very high, based as it was on the number of bits transmitted. This led to shockingly high bills and low usage, and provided one of the great motivators behind the ‘unforeseen’ growth of SMS services.

I believe that much of this still lives on in the conscious and unconscious memory of consumers and has held back major usage of mobile data services for many years.

Along comes the ‘always-on’ GPRS service

After licking the WAP wounds for several years, it was clearly recognised that something better was required if data services were to take off for mobile operators. One of the big issues for WAP was the poor data transmission speed achieved, so GPRS (General Packet Radio Service) was born.

GPRS is an IPv4-based packet-switched protocol where data users share the same data channel in a cell. The increased data rate in GPRS derives from knitting together multiple TDMA time slots, where each individual GSM time slot can manage between 9.6 and 21.4 kbit/s. Linking slots together can deliver more than 40 kbit/s (up to around 80 kbit/s) depending on the configuration implemented.
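To make the slot arithmetic concrete, here is a minimal Python sketch using the per-slot figures quoted above; the slot counts are simply illustrative of typical multislot configurations, not a statement of what any particular operator deployed.

```python
# Aggregate GPRS rate = per-slot rate x number of time slots knitted together.
# Per-slot figures are the 9.6-21.4 kbit/s range quoted above; slot counts are
# illustrative only.
SLOT_KBIT = {"lowest coding scheme": 9.6, "highest coding scheme": 21.4}

def gprs_rate(per_slot_kbit: float, downlink_slots: int) -> float:
    return per_slot_kbit * downlink_slots

print(gprs_rate(SLOT_KBIT["lowest coding scheme"], 4))    # ~38 kbit/s
print(gprs_rate(SLOT_KBIT["highest coding scheme"], 4))   # ~86 kbit/s
```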

GPRS users are connected all the time and have access to the maximum bandwidth available if no other users in their cell are receiving data at the same time.

The improved data rate (which is in the range of an old dial-up modem) and the improved reliability experienced when using GPRS have definitely led to wider use of mobile data services. Incidentally, a shared packet service should mean lower costs, but as users are still billed on a kilobits-transmitted basis, GPRS bills are still shockingly high if the service is used a lot.

GPRS services are reliable enough that GPRS routers are now widely available, as shown in the picture above (Linksys); these are often used to provide LAN back-up capabilities.

GPRS was definitely a step in the right direction.

Gaining an EDGE

EDGE (Enhanced Data rates for GSM Evolution) is an upgrade to GPRS that has gained some popularity in the USA and Europe and is known as a 2.5G service (although it derives from 3G standards).

EDGE can be deployed by any carrier offering GPRS services; the upgrade requires a swap-out to an EDGE-compatible transceiver and base station subsystem.

By using an 8PSK (8 phase shift keying) modulation scheme on each time slot it is possible to increase the data rate within a single time slot to 48kbit/s. Thus, in theory, combining all 8 time slots could deliver an aggregate 384kbit/s data service. In practice this would not be possible as there would be no spare capacity left for voice services!
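The gain comes straight from the modulation order: each radio symbol carries log2(M) bits, so moving from GMSK’s effective 1 bit per symbol to 8PSK’s 3 bits per symbol roughly triples the per-slot rate. A quick back-of-envelope check in Python using the figures quoted above:

```python
from math import log2

# 8PSK packs log2(8) = 3 bits into every radio symbol, versus 1 for GMSK,
# which is where the per-slot improvement comes from.
print(log2(8))              # 3.0 bits per symbol

PER_SLOT_KBIT = 48          # per-slot EDGE figure quoted above
print(PER_SLOT_KBIT * 8)    # 384 kbit/s if all eight time slots were combined
```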

All in all, EDGE achieves what it set out to achieve – higher data rates without an upgrade to full 3G capability – and it has been widely deployed.

The promise of the HSPA family

Following on from WAP, GPRS and EDGE have been the dominant protocols for mobile data access for a number of years now. The achieved data rates are still slow by ADSL standards, and this has put off many users after they have played with the services for a while.

With the tens of billions of dollars spent on 3G licences at the end of the last century, one would have imagined that we would all have access to megabit data rates on our mobile or cell phones by now, but that has just not been the case. 3G has been slow to be deployed and has presented many operational issues that needed to be resolved.

The Universal Mobile Telecommunications System (UMTS), also known as 3GSM, uses W-CDMA spread spectrum technology as its air interface and delivers its data services under the standards known as HSDPA (High-Speed Downlink Packet Access) and HSUPA (High-Speed Uplink Packet Access), known collectively as HSPA (High-Speed Packet Access).

Unlike the TDMA technology used in GSM, W-CDMA is a spread spectrum technology where all users transmit ‘on top’ of each other across a wide spectrum, in this case 5MHz radio channels. The equipment identifies individual users in the aggregate stream of data through the use of unique user codes that can be detected. (I explained how spread spectrum radio works in 1992 in Spread Spectrum Radio.) The adoption of this air interface makes a 3G service incompatible with GSM.

In theory, HSDPA is able to support data rates of up to 14Mbit/s, but in reality offered rates are in the 384kbit/s to 3.6Mbit/s range. The service is delivered using a downlink channel called the HS-DSCH (High-Speed Downlink Shared Channel), which allows higher bit-rate transmission than ordinary channels, with control functions carried on sister channels. The HS-DSCH is shared between all users in a cell, so in practice it would not be possible to deliver the ceiling data rate to more than a single subscriber, which makes me wonder how the industry is going to support lots of mobile TV users on a single cell. More on this issue in a future post.
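To see why the shared channel worries me, here is a crude Python sketch dividing the quoted 3.6Mbit/s cell peak between simultaneous viewers; the 300kbit/s per mobile TV stream is purely an assumed figure for illustration.

```python
# Crude look at sharing the HS-DSCH: the cell's peak rate is split between
# whoever is active at once. 3.6 Mbit/s is the figure quoted above; the
# 300 kbit/s per TV stream is an assumption for illustration only.
CELL_PEAK_KBIT = 3600
STREAM_KBIT = 300

for viewers in (1, 2, 5, 10, 15):
    per_user = CELL_PEAK_KBIT / viewers
    verdict = "ok" if per_user >= STREAM_KBIT else "too slow"
    print(f"{viewers:2d} viewers -> {per_user:6.0f} kbit/s each ({verdict})")
```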

Standardisation of HSDPA is carried out by the 3rd Generation Partnership Project (3GPP).

Inevitably, because of the ultra-slow roll-out of UMTS 3G networks, HSDPA will take a long time to get to your front door, although this is happening in quite a few countries. Here in the UK, the 3 network is currently launching (August 2007) its HSDPA data service, which will be followed by an HSUPA capability at a later date. Initially it will only offer HSDPA data cards for PCs.

Interestingly, The Register reports that 3 will offer 2.8Mbit/s and that the tariff will start at £10 a month for the Broadband Lite service providing 1Gbyte of data, rising to £25 for 7Gbytes with the Broadband Max service.

You can pre-order a broadband modem now as shown on the right.

Incidentally, Vodafone’s UK HSDPA service can be found here and their 7.2Mbit/s service here.

The future is LTE

Another project within 3GPP is the Long Term Evolution (LTE) activity, part of Release 8. The core focus of the LTE team is, as you would expect, on increasing available bandwidth, but there are a number of other concerns they are working on:

  • Reduction of latency: Latency is not an issue for streamed services but is a prime concern for interactive services. There is no point, post-WAP, in launching advanced interactive services if users have to wait around as they did in the early days of the Internet. Users have been there before.
  • Cost reduction: This is pretty self-evident, but the activity is focussed on reducing operators’ deployment costs, not on reducing consumer charge rates!
  • QoS capability: The ubiquitous need for policy and QoS capability, which I have explored in depth in the context of fixed networks.

The System Architecture Evolution (SAE) is another project that is running in parallel with, but behind, LTE. It comes as little surprise that the SAE is looking at creating a flat all-IP network core, which will (supposedly) be the key mechanism by which operators reduce their operating costs. This is still debatable to my mind.

Details of this new architecture can be found under the auspices of the Telecoms & Internet Services & Protocols for Advanced Networks or TISPAN (a six letter acronym!) which is a joint activity between ETSI and 3GPP. To quote from the web site:

Building upon the work already done by 3GPP in creating the SIP-based IMS (IP Multimedia Subsystem), TISPAN and 3GPP are now working together to define a harmonized IMS-centric core for both wireless and wireline networks.

This harmonized ALL IP network has the potential to provide a completely new telecom business model for both fixed and mobile network operators. Access independent IMS will be a key enabler for fixed/mobile convergence, reducing network installation and maintenance costs, and allowing new services to be rapidly developed and deployed to satisfy new market demands.

Based as it is on IMS (which I wrote about in IP Multimedia Subsystem or bust!) this could turn out to be a project and a half. Saying that the “devil is in the detail” would seem to be a bit of an understatement when considering TISPAN.

A recent informative PowerPoint presentation about the benefits of NGN, convergence and TISPAN can be found here.

Roundup

We seem to have come a long way since the early days of WAP, with HSPA now starting to deliver the speed of fixed-line ADSL to the mobile world. Transfer rates are indeed important, but high latency can be every bit as frustrating when using interactive services, so it is important to focus on its reduction. The other challenge with 3G is its limited coverage, which could slow uptake – that, and ensuring flat-rate access charges become the norm and NOT the per-megabit charging we have seen in the past. And boy, I bet the inter-operator roaming charges will be high!

However, bandwidth and service accessibility are not the only issues that need addressing for the mobile Internet market to sky-rocket. The platform itself is still a fundamental challenge – limited screen size and arcane menus to name but two aspects. The challenge of writing applications that are able to run on the majority of phones is definitely one of the other major issues (I touched on this in Mobile apps: Java just doesn’t cut the mustard?).

I reviewed a book earlier this year entitled Mobile Web 2.0! that talks extensively about the walled-garden and protectionist attitudes still exhibited by many of the mobile operators. This has to change and there are definite signs that this is beginning to happen with fully open Internet access now being offered by the more enlightened operators.

Maybe, just maybe, if it all comes together over the next decade then the prediction in the above book “The mobile phone network is the computer. Of course, when we say ‘phone network’ we do not mean the ‘Mobile operator network. Rather we mean an open, Web driven application…” could just come about.


The curse of BPL

August 16, 2007

I am hesitant to put pen to paper to write about Broadband over Power Lines or BPL and Power Line Communications or PLC (maybe this should be Broadband over mains in the UK!) as I have no doubt that I am biased in my views and have been for a long time. This does not derive from in-depth experience of the technology but from the fact that I have been a radio amateur or ‘ham’ since my teenage years.

In the amateur radio world BPL is seen as an ogre that could have a major impact on amateurs’ ability to continue their hobby due to interference from BPL trials or deployments. More on this later.

Today, the principal technology used to deliver broadband Internet access into homes is Asymmetric Digital Subscriber Line (ADSL) technology delivered by local telephone companies or by ISPs co-locating equipment in their switching centres. As ADSL is delivered over the ubiquitous copper cables previously used to deliver only traditional telephony services, its rollout has experienced tremendous growth over the last decade throughout the world.

However, ADSL does have some inherent commercial and technical limitations. For example, the further away you are from your local telephone exchange or central office, the lower the bandwidth that can be delivered. This means that ADSL works best in high-population areas such as towns and their suburbs. Even in the UK, there are still country areas where ADSL is not available because BT believes it is uneconomic or technically challenging to provide the service. For many years BT ran trials using wireless (what we would probably call WiMAX these days) to test the economics of providing Internet service to remote locations or caravan parks.

As ADSL can only be offered by telecommunications companies, whether they be old telephony providers or newer ISPs, other utility providers wanted to get in on the act. Water companies installed fibre optic cables when they dug trenches, and canal and railway operating companies allowed telecommunications companies to run cables along their facilities.

We should also not forget our very own Energis (now Cable and Wireless) who started by providing wholesale backbone services by running cables along pylons. At one time nearly every electricity company had a telecommunications division.

This neatly brings us back to Broadband over Power Line technology. The logic that drove the development of BPL is quite straightforward to understand. Every home is connected to an electricity distribution network, so why should that not be used to deliver a broadband Internet service? This would mean that electricity companies could participate in the Internet revolution and create additional revenues to fill their coffers! Moreover, maybe BPL could be used to deliver broadband access to remote locations where ADSL cannot reach.

There is one thing about BPL that is clearly different from all the other technologies I have written about, and this may seem a little strange. There is no IETF or IEEE technical standard for BPL, although there are standards activities afoot. This makes deploying a BPL service a rather hit-or-miss affair.

Deployment is also challenging due to the tremendous variation in electricity distribution networks throughout the world, which makes standardisation a tad difficult. For example, in the UK hundreds if not thousands of homes are connected to a local substation where the high transmission voltages are converted to the normal 240 volt house supply. Hence it should be possible to ‘inject’ the broadband signal at the substation and deliver service to many houses at the same time, which helps improve the service economics.

In the USA the situation is quite different because of the distances involved. It is always more efficient to carry electricity at the highest voltage possible over long distances to reduce losses, so in the USA it is common practice to have the transformation to 110 volts done at the last possible opportunity by placing an individual transformer on a pole outside each home. This can wreck BPL service economics. However, this has not stopped many service trials from taking place.
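A back-of-envelope sketch shows why the topology matters so much to the economics: the head-end or injector cost is shared across however many homes sit behind the same transformer. All the figures below are hypothetical placeholders, purely for illustration.

```python
# Hypothetical head-end cost shared across the homes behind one transformer.
INJECTOR_COST = 5000.0   # assumed cost per injection point, not a real figure

def cost_per_home(homes_behind_transformer: int) -> float:
    return INJECTOR_COST / homes_behind_transformer

print(cost_per_home(300))   # UK-style substation serving hundreds of homes
print(cost_per_home(4))     # US-style pole transformer serving a handful
```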

BPL technology

A BPL service can offer similar bandwidth capabilities to ADSL in that it supports a 256kbit/s upstream and up to 2.7Mbit/s downstream. It achieves this by encoding data across the medium wave and shortwave spectrum of 1.6 to 30MHz or higher. In-house modems connect back to a head-end located at the substation, where fibre or radio can be used to connect back to a central office, much as in wide-area Wi-Fi services (see The Cloud hotspotting the planet). The modulated radio frequency carrier is injected into the local electricity distribution network using an isolation capacitor, and the transmitter can have a power of hundreds of watts.

BPL modems use several methods of modulation depending on the service bandwidth required:

  • GMSK (Gaussian minimum-shift keying) for bandwidths less than 1Mbit/s
  • CDMA (Code division multiple access) as used in mobile 3G services for greater than 1Mbit/s, and
  • OFDM (Orthogonal frequency-division multiplexing) for bandwidths up to 45Mbit/s

Most modern BPL deployments use OFDM, as higher bandwidths are required if the service operators are to compete with their local telephone companies’ ADSL services.
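For readers unfamiliar with OFDM, here is a minimal, self-contained Python/NumPy sketch of the basic idea – data symbols are spread across many subcarriers with an inverse FFT and a cyclic prefix is added. The subcarrier count and prefix length are arbitrary illustrative choices, not those of any BPL product.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SUBCARRIERS = 64       # illustrative subcarrier count
CYCLIC_PREFIX = 16       # illustrative guard interval, in samples

# Map random bits onto QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, size=(N_SUBCARRIERS, 2))
qpsk = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# An OFDM symbol is the inverse FFT of the subcarrier symbols, with a cyclic
# copy of its tail prepended to absorb echoes on the (very echoey) power line.
time_domain = np.fft.ifft(qpsk)
ofdm_symbol = np.concatenate([time_domain[-CYCLIC_PREFIX:], time_domain])

# The receiver strips the prefix and FFTs back to recover the subcarriers.
recovered = np.fft.fft(ofdm_symbol[CYCLIC_PREFIX:])
assert np.allclose(recovered, qpsk)
print(len(ofdm_symbol), "samples per OFDM symbol")
```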

There are several organisations involved in standardisation efforts:

Consumer Electronics Powerline Communication Alliance (CEPCA): A PowerPoint introduction to the activities of the CEPCA can be found here.

Their mission and purpose is the:

  • Development of specifications enabling the coexistence
    • Between in-home PLC Systems
    • Between Access PLC Systems and in-home PLC Systems
  • Promotion of high speed PLC technologies in order to achieve world-wide adoption thereof.

Power Line Communications Forum (plcforum): A similar body to CEPCA with many equipment suppliers as members.

HomePlug Powerline Alliance (HPPA): This group focuses on home networking using home electricity wiring as the distribution network – as they say, “power outlets are almost everywhere someone might want to use a networked device at home.”

IEEE P1901: According to their scope description the P1901 project will “develop a standard for high speed (>100 Mbps at the physical layer) communication devices via alternating current electric power lines, so called Broadband over Power Line (BPL) devices. The standard will use transmission frequencies below 100 MHz.”

Powernet: The main project objective of Powernet is to develop and validate ‘plug and play’ Cognitive Broadband over Power Lines (CBPL) communications equipment. Powernet is a European Commission project.

Side effects

With other postings about communications technologies I would normally go on to say that, although there is much work to be done, BPL is a complementary technology to ADSL and has its place in the Internet marketplace. My commercial reservations are quite strong, however, in that it is difficult to see how BPL can effectively compete with the now ubiquitous ADSL utilised by every local telephone company on the planet. Maybe there are niche markets where BPL could work, and these would be geographical areas that ADSL cannot reach – yet.

However, as I indicated in my opening paragraph, there are other concerns about BPL that are not encountered with any of the other ways of providing Internet service to homes, whether they be delivered over wires such as ADSL or wirelessly such as Wi-Fi or WiMAX.

BPL has a dark side which I believe to be unacceptable and which could prevent other legitimate users of the shortwave radio spectrum from pursuing their interests and hobbies without interference.

Interference is the issue, and it can be better understood by looking at the following video of a BPL service trial currently taking place in Australia.

BPL interference is causing problems in other countries as well, even the USA, where the American Radio Relay League (ARRL), the body that represents US radio amateurs, was forced into legal action in May 2007: ARRL Files Federal Appeals Court Brief in Petition for Review of BPL Rules

Also in May, the US Federal Communications Commission (FCC) called on a BPL manufacturer to show that it complies with its experimental licence following interference complaints – FCC Demands Ambient Demonstrate Compliance with BPL License Conditions

To quote the ARRL: “The Commission’s obsessive compulsion to avoid any bad news about BPL has clearly driven its multi-year inaction,” the League continued. “Had this been any other experimental authorization dealing with any technology other than BPL, the experimental authorization would have been terminated long ago.”

Many amateurs see BPL as the biggest threat to their hobby that they have ever seen.

So why should there be this level of interference from BPL?

It might be good to start answering this question by looking at ADSL, as this does not have any major interference issues despite its deployment in many millions of homes. ADSL is delivered into people’s homes via the copper telephone line. This cable is not just a single copper wire, as it might have been in the earliest days of telephony, but rather a twisted pair.

A twisted pair cable acts like a rather crude coaxial cable. It is balanced in that the signal flows forward through one wire and returns through the other, so the equal and opposite currents cancel each other out and the cable does not radiate the signal it is carrying to the outside world. Twisted pair cables are not as low-loss as coaxial cables, so there is some attenuation, but it is quite small over the length of cable usually used to connect a home to a telephone pole.

In general, ADSL has been free of interference problems because of the use of twisted pair cables. Imagine the consumer furore that would occur if ADSL interfered with FM radio or TV services.

It’s interesting to remember that cable companies also use broadband RF encoding, but as services are delivered using high-quality coaxial cable or fibre there is generally no interference (The tale of DOCSIS and cable operators).

On the other hand, the electricity power lines that bring power into houses are not shielded and are not twisted pairs. They are the standard three- or four-core cables we are all familiar with from wiring kettle plugs, although of a heavier gauge.

BPL transmissions are spread over the shortwave spectrum with a head-end power of possibly hundreds of watts, and the lossy distribution cables effectively act as an antenna or aerial, so the wideband BPL signal radiates quite effectively over a wide area, causing the not inconsiderable interference seen in the video above.

Surely, regulatory bodies such as OFCOM or the FCC would not allow a service that significantly interfered with other spectrum users to go ahead – would they? That is not as easy to answer today as it would have been a decade ago, when anti-interference regulations were very strong. Nowadays, in this commercial world we live in, far more flexibility is given if there is a potential commercial benefit. For example, even in the UK the old guard bands (allocated but unused spectrum between services to provide isolation) have been sold off for use in picocell GSM services, as discussed in GSM pico-cell’s moment of fame.

The level of interference from a service such as BPL would not – could not – have been tolerated a few years ago when everyone used the shortwave bands for entertainment. But in this modern ‘digital age’ shortwave seems an anachronism, and who really cares if it is not usable…

At least two groups of individuals do care: radio amateurs and shortwave listeners. BPL vendors and service providers have attempted to suppress their criticisms of BPL with what can only be described as a sticking-plaster solution: putting filters on the BPL transmitter so that notches are inserted in the broadband spectrum to coincide with the amateur bands.

However, the general consensus among amateurs who have been involved in notching trials is that the notches do indeed reduce interference, but not by a sufficient amount for workable co-existence.

Another concern is that BPL is not just used for the provision of Internet access services; it is also possible to buy modems that provide in-house LAN capabilities in competition with Wi-Fi. This could be another worrying source of interference to shortwave services. Bearing in mind that there is no filtering at a mains socket, the use of a BPL modem in one house will radiate into all the homes connected to the same substation.

Roundup

I really am unable to see any real benefit in this technology when compared to cable operator DOCSIS or telephone company ADSL Internet services, whose access infrastructure is designed for the purpose. Just slapping a broadband transmitter onto a local electricity distribution network is crude and definitely NOT fit for purpose – even if filter notches are applied.

If the electricity industry redesigned their supply cables to be coaxial or twisted pair, which in practice is not really technically or commercially achievable, then the concept might work.

I doubt that BPL is viable in the long term and my view is that its use will fade with time. In the meantime, if I am asked for a financial contribution to fight BPL, I reckon I would dig deep into my pockets.

One example of an up-and-coming trial is TasTel in Hobart, Australia, a partnership between Aurora Energy and AAPT, who say they have a unique service. To quote their web site:

Because BPL is brought to you by TasTel and eAurora, we can give you something nobody else can offer: fast Internet access and cheap broadband phone calls through a single service, on one bill which is sent to you electronically.

Where have I heard this before – time to move away from Hobart?


Business plans are overrated

July 29, 2007

There is more than an element of obvious insight in Paul Kedrosky‘s recent post:

“Business plans are overrated. … Why?

… Because VCs are professional nit-pickers. Give them something to find fault with, and they’ll do it with abandon. I generally tell people to come to pitch meetings with less information rather than more. Sure, you’ll get pressed for more, but finesse it.

Presenting a full and detailed plan is, nine times out of ten, a path to a ‘No’ — or at least more time-consuming than having said less.”

Paul Kedrosky, in the wake of the VC financing of Twitter, which has no business plan, no business model and no profits.


The Cloud hotspotting the planet

July 25, 2007

I first came across the embryonic idea behind The Cloud in 2001 when I first met its founder, George Polk. In those days George was the ‘Entrepreneur in Residence’ at iGabriel, an early-stage VC formed in the same year.

One of his first questions was “how can I make money from a Wi-Fi hotspot business?” I certainly didn’t claim that I knew at the time but sure as eggs is eggs I guess that George, his co-founder Niall Murphy and The Cloud team are world experts by now! George often talked about environmental issues but I was sorry to hear that he had stepped down from his CEO position (he’s still on the Board) to work on climate change issues.

The vision and business model behind The Cloud is based on the not unreasonable idea that we all now live in a connected world where we use multiple devices to access the Internet. We all know what these are: PCs, notebooks, mobile phones, PDAs and games consoles etc. etc. Moreover, we want to transparently use any transport bearer that is to hand to access the Internet, no matter where we are or what we are doing. This could be DSL in the home, a LAN in the office, GPRS on a mobile phone or a Wi-Fi hotspot.

The Cloud focuses on the creation and enablement of public Wi-Fi so that consumers and business people are able to connect to the Internet wherever they may be located when out and about.

One of the big issues with Wi-Fi hotspots back in the early years of the decade (and it remains one, though less so these days) was that the hotspot provision industry was highly fragmented, with virtually every public hotspot being managed by a different provider. When these providers wanted to monetise their activities, it seemed that you needed to set up a different account at each site you visited. This cast a big shadow over users and slowed market growth considerably.

What was needed in the marketplace was Wi-Fi aggregators, or market consolidation, that would allow a roaming user to seamlessly access the Internet from lots of different hotspots without having to hold multiple accounts.

Meeting this need for always-on connectivity is where The Cloud is focused, and their aim is to enable wide-scale availability of public Wi-Fi access through four principal methods:

  1. Direct deployment of hotspots: (a) in coffee shops, airports, public houses etc. in partnership with the owners of these assets; (b) in wide-area locations such as city centres in partnership with local councils.
  2. Wi-Fi extensions of existing public fixed IP networks.
  3. Wi-Fi extension of existing private enterprise networks – “co-opting networks”
  4. Roaming relationships with other Wi-Fi operators and service providers, such as with iPass in 2006.

The Cloud’s vision is to stitch together all these assets and create a cohesive and ubiquitous Wi-Fi network to enable Internet access at any location using the most appropriate bearer available.

It’s The Cloud’s activities in 1(a) above that are getting much publicity at the moment, as back in April the company announced coverage of the City of London in partnership with the City of London Corporation. The map below shows the extent of the network.

Note: However, The Cloud will not have everything all its own way in London, as a ‘free’ Thames-based Wi-Fi network has just been launched (July 2007) by Meshhopper.

On July 18th 2007 The Cloud announced coverage of Manchester city centre as per the map below:

These network roll-outs are very ambitious and are some of the largest deployments of wide-area Wi-Fi technology in the world, so I was intrigued as to how this was achieved and what challenges were encountered during the roll-out.

Last week I talked with Niall Murphy, The Cloud’s Co-Founder and Chief Strategy Officer, to catch up with what they were up to and to find out what he could tell me about the architecture of these big Wi-Fi networks.

One of my first questions in respect of the city-centre networks was about in-building coverage, as even high-power GSM telephony has issues with this and Wi-Fi nodes are limited to a maximum power of 100mW.

I think I already knew the answer to this, but I wanted to see what The Cloud’s policy was. As I expected, Niall explained that “this is a challenge” and consideration of this need was not part of the objective of the deployments which are focused on providing coverage in “open public spaces“. This has to be right in my opinion as the limitation in power would make this an unachievable objective in practice.

Interestingly, Niall talked about The Cloud’s involvement in OFCOM‘s investigation to evaluate whether there would be any additional commercial benefit in allowing transmit powers greater than 100mW. However, The Cloud’s recommendation was not to increase power for two reasons:

  1. Higher power would create a higher level of interference over a wider area which would negate the benefits of additional power.
  2. Higher power would negatively impact battery life in devices.

In the end, if I remember correctly, the recommendation by OFCOM was to leave the power limits as they were.

I was interested in the architecture of the city-wide networks as I really did not know how they had gone about the challenge. I am pretty familiar with the concept of mesh networks as I tracked the path of one of the early UK pioneers of this technology, Radiant Networks. Unfortunately, Radiant went to the wall (Radiant Networks flogged) in 2004, for reasons I assume to be concerned with the use of highly complex, proprietary and expensive nodes (as shown on the left) and the use of the 26, 28 and 40GHz bands, which would severely impact the economics due to small cell sizes.

Fortunately, Wi-Fi is nothing like those early proprietary approaches to mesh networks, and the technology has come of age through wide-scale global deployment. More importantly, this has also led to considerably lower equipment costs. The reason for this is that Wi-Fi uses the 2.4GHz ‘free band’, and most countries around the world have standardised on the use of this band, giving Wi-Fi equipment manufacturers access to a truly global market.

Anyway, getting back to The Cloud, Niall said that “the aim behind the City of London network was to provide ubiquitous coverage in public spaces to a level of 95%, which we have achieved in practice“.

The network uses 127 nodes which are located on street lights, video surveillance poles or other street furniture owned by their partner, the City of London Corporation. Are 127 nodes enough, I ask? Niall’s answer was an emphatic “yes“, although “the 150 metre cell radius and 100mW power limitation of Wi-Fi definitely provides a significant challenge“.

Interestingly, Niall observed that deploying a network in the UK was much harder than in the US due to the lower permitted power levels in the 2.4GHz band. The Cloud’s experience has shown that a cell density two or three times greater is required in a UK city – comparing London to Philadelphia, for example. This raises a lot of interesting questions about hotspot economics!
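The economics follow directly from the geometry: node count scales roughly with one over the square of the cell radius, so even a modest drop in usable radius multiplies the number of nodes required. A rough Python sketch, where the 150 metre radius comes from the interview but the coverage area and the larger ‘higher-power’ comparison radius are my own assumed figures:

```python
from math import ceil, pi

# Node count scales with area / (pi * r^2). The 150 m radius is the UK figure
# quoted above; the 2.9 km^2 area (roughly the Square Mile) and the 250 m
# 'higher-power' comparison radius are assumptions for illustration only.
AREA_KM2 = 2.9

def nodes_needed(radius_m: float) -> int:
    return ceil(AREA_KM2 / (pi * (radius_m / 1000) ** 2))

uk, us = nodes_needed(150), nodes_needed(250)
print(uk, us, f"-> roughly {uk / us:.1f}x the density at the lower power")
```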

Much time was spent on hotspot planning and this was achieved in partnership with a Canadian company called Belair Networks. One of the interesting aspects of this activity was that there was “serious head scratching” by Belair as being a Canadian company they were used to nice neat square grids of streets and not the no-straight-line topology mess of London!

Data traffic from the 127 nodes that form The Cloud’s City of London network is back-hauled to seven 100Mbit/s fibre PoPs (Points of Presence) using 5.6GHz radio. Thus each node has two transceivers. The first is the Wi-Fi transceiver with a 2.4GHz antenna trained on the appropriate territory. The second is a 5.6GHz transceiver pointing to the next node, where the traffic daisy-chains back to the fibre PoP, effectively creating a true mesh network (incidentally, backhaul is one of the main uses of WiMAX technology). I won’t talk about the strengths and weaknesses of mesh radio networks here but will write a post on this subject at a future date.
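One consequence of daisy-chained backhaul is worth illustrating: the radio hop nearest each PoP has to carry the traffic of the whole chain behind it. A rough Python sketch, where the node and PoP counts come from the interview but the per-node busy-hour traffic figure is entirely made up:

```python
# The final 5.6GHz hop into each PoP aggregates the traffic of every node
# behind it. Node/PoP counts are from the article; per-node load is assumed.
NODES, POPS, FIBRE_MBIT = 127, 7, 100
PER_NODE_MBIT = 2.0   # hypothetical busy-hour offered load per node

nodes_per_pop = NODES / POPS
last_hop_mbit = nodes_per_pop * PER_NODE_MBIT
print(f"~{nodes_per_pop:.0f} nodes per PoP, ~{last_hop_mbit:.0f} Mbit/s on the "
      f"final hop against {FIBRE_MBIT} Mbit/s of fibre")
```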

According to Niall, the tricky part of the build was finding appropriate sites for the nodes. You might think this was purely down to radio propagation issues, but there was also the problem that the physical assets they were using didn’t always turn out to be where they appeared to be on the maps! “We ended up arriving at the street lamp indicated on the map and it was not there!” This is much like many carriers, who also do not know where some of their switches are located or how many customer leased lines they have in place.

Another interesting anecdote concerned the expectations of journalists at the launch of the network. “Because we were talking about ubiquitous coverage, many thought they could jump in a cab and watch Joost streaming video as they weaved their way around the city“. Oh, it didn’t work then, I said to Niall, expecting him to say that they were disappointed. “No,” he said, “it absolutely worked!“

Niall says the network is up and running and working according to their expectations: “There is still a lot of tuning and optimisation to do but we are comfortable with the performance.“

Incidentally, The Cloud owns the network and works with the Corporation of London as the landlord.

Round up

The Cloud has seemingly achieved a lot this year with the roll-out of the city-centre networks and the sign-up of six to seven thousand users in London alone. This was backed up by the launch of UltraWiFi, a flat-rate service costing £11.99 per month.

Incidentally, The Cloud do not see themselves as being in competition with cable companies or mobile operators, concentrating as they do on providing pure Wi-Fi access to individuals on the move – although in many ways their service actually does compete.

They operate in the UK, Sweden, Denmark, Norway, Germany and The Netherlands. They’re also working with a wide array of service providers, including O2, Vodafone, Telenor, BT, iPass, Vonage and Nintendo, amongst others.

The big challenge ahead, as I’m sure they would acknowledge, is how they are going to ramp up revenues and take their business into the big time. I am confident that they are well able to rise to this challenge. All I know is that public Wi-Fi access is a crucial capability in this connected world, and without it the Internet world would be a much less exciting and usable place.


IPv6 to the rescue – eh?

June 21, 2007

To me, IPv6 is one of the Internet’s real enigmas as the supposed replacement for the Internet’s ubiquitous IPv4. We all know this has not happened.

The Internet Protocol (IPv4) is the principal protocol that lies behind the Internet, and it originated before the Internet itself. In the late 1960s a number of US universities needed to exchange data and were interested in developing the new network technologies, switching capabilities and protocols required to achieve this.

The result of this was the formation of the Advanced Research Projects Agency (ARPA), a US government body that started developing a private network called ARPANET; the agency was later renamed the Defense Advanced Research Projects Agency (DARPA). The initial contract to develop the network was won by Bolt, Beranek and Newman (BBN), which was eventually bought by Verizon and sold to two private equity companies in 2004 to be renamed BBN Technologies.

The early services required by the university consortium were file transfer, email and the ability to remotely log onto university computers. The first version of the protocol was called the Network Control Protocol (NCP) and saw the light of day in 1971.

In 1973, Vint Cerf, who worked on NCP (and is now Chief Internet Evangelist at Google), and Robert Kahn (who previously worked on the Interface Message Processor [IMP]) kicked off a programme to design a next-generation networking protocol for the ARPANET. This activity resulted in the standardisation, through the ARPANET Request For Comments (RFC) process, of TCP/IP in 1981, with IPv4 specified in RFC 791.

IPv4 uses a 32-bit address structure which we most commonly see written in dot-decimal notation such as aaa.bbb.ccc.ddd, representing a total of 4,294,967,296 unique addresses. Not all of these are available for public use as many address ranges are reserved.
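As a quick illustration of how the dotted notation maps onto a single 32-bit number, here is a small Python sketch (the address used is just a documentation example):

```python
def ipv4_to_int(address: str) -> int:
    """Pack the four dotted-decimal octets into a single 32-bit integer."""
    octets = [int(part) for part in address.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for octet in octets:
        value = (value << 8) | octet
    return value

print(ipv4_to_int("192.0.2.1"))   # 3221225985
print(2 ** 32)                    # 4294967296 possible addresses in total
```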

An excellent book that pragmatically and engagingly goes through the origins of the Internet in much detail is Where Wizards Stay Up Late – it’s well worth a read.

The perceived need for upgrading

The whole aim of the development of IPv4 was to provide a schema for global computing by ensuring that computers could uniquely identify themselves through a common addressing scheme and communicate in a standardised way.

No matter how you look at it, IPv4 must be one of the most successful standardisation efforts ever to have taken place, measured by its ubiquity today. Just how many servers, routers, switches, computers, phones and fridges are there that contain an IPv4 protocol stack? I’m not too sure, but it’s certainly a big, big number!

In the early 1990s, as the Internet really started ‘taking off’ outside of university networks, it was generally thought that the IPv4 specification was beginning to run out of steam and would not be able to cope with the scale of the Internet as the visionaries foresaw. Although there were a number of deficiencies, the prime mover for a replacement to IPv4 came from the view that the address space of 32 bits was too restrictive and would completely run out within a few years. This was foreseen because it was envisioned, probably not wrongly, that nearly every future electronic device would need its own unique IP address and if this came to fruition the addressing space of IPv4 would be woefully inadequate.

Thus the IPv6 standardisation project was born. IPv6 packaged together a number of IPv4 enhancements that would enable the IP protocol to be serviceable for the 21st century.

Work started in 1992/3 and by the end of 1996 a number of RFCs had been released, starting with RFC 1883 (later revised as RFC 2460). One of the most important RFCs to be released was RFC 1933, which specifically looked at the mechanisms for transitioning IPv4 networks to IPv6. These covered the ability of routers to run IPv4 and IPv6 stacks concurrently – “dual stack” – and the pragmatic ability to tunnel the IPv6 protocol over ‘legacy’ IPv4-based networks such as the Internet.

To quote RFC 1933:

This document specifies IPv4 compatibility mechanisms that can be implemented by IPv6 hosts and routers. These mechanisms include providing complete implementations of both versions of the Internet Protocol (IPv4 and IPv6), and tunnelling IPv6 packets over IPv4 routing infrastructures. They are designed to allow IPv6 nodes to maintain complete compatibility with IPv4, which should greatly simplify the deployment of IPv6 in the Internet, and facilitate the eventual transition of the entire Internet to IPv6.

The IPv6 specification contained a number of areas of enhancement:

Address space: Back in the early 1990s there was a great deal of concern about the lack of availability of public IP addresses. With the widespread uptake of IP rather than ATM as the basis of enterprise private networks, as discussed in a previous post, The demise of ATM, most enterprises had gone ahead and implemented their networks with any old IP addresses they cared to use. This didn’t matter at the time because those networks were not connected to the public Internet, so it was of no consequence whether other computers or routers elsewhere had selected the same addresses.

It first became a serious problem when two divisions of a company tried to interconnect within their private network and found that both divisions had selected the same default IP addresses and could not connect. This was further compounded when those companies wanted to connect to the Internet and found that their privately selected IP addresses could not be used in the public space as they had been allocated to other companies.

The answer to this problem was to increase the IP protocol addressing space to accommodate all the private networks coming onto the public network. Combined with the vision that every electronic device could contain an IP stack, IPv6 increased the address space to 128 bits rather than IPv4’s 32 bits.

Headers: Headers in IPv4 (headers precede the data in each packet and carry addressing, routing and other control information) were already becoming unwieldy, and simply extending the IPv4 layout to carry IPv6’s much larger addresses would not have helped. IPv6 therefore simplifies things: the base header is a fixed 40 bytes, compared with IPv4’s minimum of 20, but it contains only eight fields and no options, whereas the IPv4 header has a dozen or so fields plus options. Any additional functionality is carried in extension headers that are chained together and used only when needed.
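To make the fixed layout concrete, here is a minimal Python sketch that packs an IPv6 base header from its eight fields; the addresses are documentation examples and the payload length is arbitrary.

```python
import socket
import struct

def ipv6_header(src: str, dst: str, payload_len: int,
                next_header: int = 6, hop_limit: int = 64) -> bytes:
    """Build the fixed 40-byte IPv6 base header (no options, no checksum)."""
    version_tc_flow = 6 << 28   # version 6, traffic class 0, flow label 0
    fixed = struct.pack("!IHBB", version_tc_flow, payload_len,
                        next_header, hop_limit)
    return (fixed
            + socket.inet_pton(socket.AF_INET6, src)
            + socket.inet_pton(socket.AF_INET6, dst))

header = ipv6_header("2001:db8::1", "2001:db8::2", payload_len=20)
print(len(header))   # always 40 bytes; extras go in chained extension headers
```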

Configuration: Managing an IP network is pretty much a manual exercise, with few tools to automate the activity beyond the likes of DHCP (the automatic allocation of IP addresses to computers). Network administrators seem to spend most of the day manually entering IP addresses into fields in network management interfaces, which really does not make much use of their skills.

IPv6 has incorporated enhancements to enable a ‘fully automatic’ mode where the protocol can assign an address to itself without human intervention. The IPv6 stack will send out a request to enquire whether any other device has the same address. If it receives a positive reply it will add a random offset and ask again until it receives no reply. IPv6 can also identify nearby routers and automatically detect whether a local DHCP server is available.
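Purely as a toy illustration of the retry idea described above (the real mechanism is IPv6 Neighbour Discovery and duplicate address detection, not this loop), a minimal Python sketch:

```python
import random

def pick_free_address(in_use: set, space: int = 2 ** 16) -> int:
    """Toy version of the retry loop described above: propose an address,
    probe for a clash, and nudge it by a random offset until nobody answers.
    Real IPv6 autoconfiguration uses Neighbour Discovery, not this."""
    candidate = random.randrange(space)
    while candidate in in_use:          # a 'positive reply' means a clash
        candidate = (candidate + random.randrange(1, 1024)) % space
    in_use.add(candidate)
    return candidate

taken = set()
print([pick_free_address(taken) for _ in range(3)])
```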

Quality of Service: IPv6 has embedded enhancements to enable the prioritisation of certain classes of traffic by assigning a value to each packet in the Traffic Class field, complemented by the Flow Label.

Security: IPv6 incorporates IPsec to provide authentication and encryption and so improve the security of packet transmission, with encryption handled by the Encapsulating Security Payload (ESP).

Multicast: Multicast addresses are group addresses, so that packets can be sent to a group rather than an individual host. IPv4 handles this rather inefficiently, while IPv6 builds the concept of a multicast address into its core.

So why aren’t we all using IPv6?

The short answer to this question is that IPv4 is a victim of its own success. The task of migrating the Internet to IPv6, even taking into account the available migration options of dual-stack hosting and tunnelling, is just too challenging.

As we all know, the Internet is made up of thousands of independently managed networks, each looking to thrive commercially or often just to survive. There is no body overseeing how the Internet is run except for specific technical aspects such as Domain Name System (DNS) management or the standards body, the IETF. (Picture credit: The logo of the Linux IPv6 Development Project)

No matter how much individual evangelists push for the upgrade, getting the world to do so is pretty much an impossible task unless everyone sees that there is a distinct commercial and technical benefit for them to do so.

This is the core issue, as the benefits of upgrading to IPv6 have been seriously eroded by the advent of other standards efforts that address each of the IPv6 enhancements on a stand-alone basis. The two principal ones are NAT and MPLS.

Network address translation (NAT): To overcome the limitation in the number of available public addresses, NAT was implemented. This means that many users and computers in a private network are able to access the public Internet using a single public IP address. Each outgoing session is given a transient, dynamic mapping when the user accesses the Internet, and the NAT software manages the translation between the public IP address and the private address used within the private network.
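A minimal sketch of the idea, assuming a toy table that maps each internal address and port onto a fresh port on the single shared public address (the addresses are documentation examples):

```python
import itertools

PUBLIC_IP = "203.0.113.5"          # example public address shared by everyone
_next_port = itertools.count(40000)
_table = {}                        # (private IP, private port) -> public port

def translate_outbound(private_ip: str, private_port: int):
    """Map an internal flow onto the shared public address, reusing mappings."""
    key = (private_ip, private_port)
    if key not in _table:
        _table[key] = next(_next_port)
    return PUBLIC_IP, _table[key]

print(translate_outbound("192.168.1.10", 51000))   # ('203.0.113.5', 40000)
print(translate_outbound("192.168.1.11", 51000))   # ('203.0.113.5', 40001)
```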

NAT effectively addressed the concern that the Internet might run out of address space. It could be argued that NAT is just a short-term solution that came at a big cost to users. The principal downside is that external hosts are unable to set up long-term relationships with an individual user or computer behind a NAT wall, as that machine has not been assigned its own unique public IP address and the internal, dynamically assigned addresses can change at any time.

This particularly affects applications that embed addresses so that traffic can always be sent to a specific individual or computer – VoIP is probably the main victim.

It’s interesting to note that the ability to uniquely identify individual computers was the main principle behind the development of IPv4, so it is quite easy to see why strong views are often expressed about NAT!

MPLS and related QoS standards: The advent of MPLS, covered in The rise and maturity of MPLS and MPLS and the limitations of the Internet, addressed much of the IP community’s need to tackle Quality of Service issues by separating high-priority service traffic from low-priority traffic.

Round up

Don’t break what works. IP networks take a considerable amount of skill and hard work to keep alive. They always seem to be ‘living on the edge’ and break down when a network administrator gets distracted. ‘Leave well alone’ is the mantra of many operational groups.

The benefits of upgrading to IPv6 have been considerably eroded by the advent of NAT and MPLS. Combine this with the lack of an overall management body that could force through a universal upgrade, and the innate inertia of carriers and ISPs, and IPv6 will probably never achieve as dominant a position as its progenitor, IPv4.

According to one overview of IPv6, which gets to the heart of the subject, “Although IPv6 is taking its sweet time to conquer the world, it’s now showing up in more and more places, so you may actually run into it one of these days.”

This is not to say that IPv6 is dead; rather, it is being marginalised by only being run in closed networks (albeit some rather large ones). There is real benefit in the Internet being upgraded to IPv6, as every individual and every device connected to it could be assigned its own unique address, as envisioned by the founders of the Internet. The inability to do this severely constrains services and applications, which are unable to clearly identify an individual on an ongoing basis in the way that is inherent in a telephone number. This reflects badly on the Internet.

IPv6 is a victim of the success of the Internet and the ubiquity of IPv4 and will probably never replace IPv4 in the Internet in the foreseeable future (maybe I should never say never!). I was once asked by a Cisco Fellow how IPv6 could be rolled out; after shrugging my shoulders and laughing, I suggested that it needed a Bill Gates of the Internet to force through the change. That suggestion did not go down too well. Funnily enough, now that IPv6 is incorporated into Vista we could see the day when this happens. The only fly in the ointment is that Vista has the same problems and challenges as IPv6 in replacing XP – users are finally tiring of never-ending upgrades with little practical benefit.

Interesting times.


sip, Sip, SIP – Gulp!

May 22, 2007

Session Initiation Protocol, or ‘SIP’ as it is known, has become a major signalling protocol in the IP world as it lies at the heart of Voice-over-IP (VoIP). It’s a term you can hardly miss as it is supported by every vendor of phones on the planet (Picture credit: Avaya: An Avaya SIP phone).

Many open software groups have taken SIP to the heart of their initiatives and an example of this is IP Multimedia Subsystem (IMS) which I recently touched upon in IP Multimedia Subsystem or bust!

SIP is a real-time IP application-layer protocol that sits alongside HTTP, FTP, RTP and other well known protocols used to move data through the Internet. However, it is an extremely important one because it enables SIP devices to discover, negotiate, connect and establish communication sessions with other SIP-enabled devices.

SIP was co-authored in 1996 by Jonathan Rosenberg, who is now a Cisco Fellow, Henning Schulzrinne, who is Professor and Chair in the Dept. of Computer Science at Columbia University, and Mark Handley, who is Professor of Networked Systems at UCL. SIP was taken up by an IETF SIP Working Group, which still maintains the RFC 3261 standard. SIP was originally used on the US experimental multicast network commonly known as the Mbone. This makes SIP an IT/IP standard rather than one developed by the communications industry.

Prior to SIP, voice signalling protocols such as SS7 (C7 in the UK) were essentially proprietary protocols aimed at use by the big telecommunications companies on their large Public Switched Telephone Network (PSTN) voice networks. With the advent of the Internet and the ‘invention’ of Voice over IP, it soon became clear that a new signalling protocol was required that was peer-to-peer, scalable, open, extensible, lightweight and simple in operation, and that could be used by a whole new generation of real-time communications devices and services running over the Internet.

SIP itself is based on earlier IETF / Internet standards, principally the Hypertext Transfer Protocol (HTTP), which is the core protocol behind the World Wide Web.

Key features of SIP

The SIP signalling standard has many key features:

Communications device identification: SIP supports a concept known as the Address of Record (AOR), which represents a user’s unique address in the world of SIP communications. An example of an AOR is sip: xxx@yyy.com. To enable a user to have multiple communications devices or services, SIP uses the Uniform Resource Identifier (URI). A URI is like the Uniform Resource Locator (URL) used to identify servers on the world wide web. URIs can be used to specify the destination device of a real-time session e.g.

  • IM: sip: xxx@yyy.com (Windows Messenger uses SIP)
  • Phone: sip: 1234 1234 1234@yyy.com; user=phone
  • FAX: sip: 1234 1234 1235@yyy.com; user=fax

A SIP URI can use both traditional PSTN numbering schemes AND alphabetic schemes as used on the Internet.

Focussed function: SIP only manages the set-up and tear-down of real-time communication sessions; it does not manage the actual transport of the media data. Other protocols undertake this task.

Presence support: SIP is used in a variety of applications but has found a strong home in applications such as VoIP and Instant Messaging (IM). What makes SIP interesting is that it is not only capable of setting up and tearing down real-time communications sessions but also supports and tracks a user’s availability through the presence capability. (The open Jabber standard tackles presence too, though with its own protocol, XMPP, rather than SIP.) I wrote about presence in – The magic of ‘presence’.

Presence is supported through a key SIP extension: SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) [a really contrived acronym!]. This allows users to state their status, as seen in most of the common IM systems. AOL Instant Messenger is shown in the picture on the left.

SIMPLE means that the concept of Presence can be used transparently on other communications devices such as mobile phones, SIP phones, email clients and PBX systems.

User preference: SIP user preference functionality enables a user to control how a call is handled in accordance with their preferences. For example:

  • Time of day: A user can take all calls during office hours but direct them to a voice mail box in the evenings.
  • Buddy lists: Give priority to certain individuals according to a status associated with each contact in an address book.
  • Multi-device management: Determine which device / service is used to respond to a call from particular individuals.
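The sketch below illustrates the first and third preferences above. The rules, contacts and device names are invented for the purpose and are not part of any SIP specification.

```python
# Illustrative sketch of user-preference call routing; the hours, names
# and devices are invented, not taken from any SIP specification.
from datetime import datetime
from typing import Optional

BUDDIES = {"sip:boss@example.com": "desk-phone",
           "sip:friend@example.org": "mobile"}

def route_call(caller: str, now: Optional[datetime] = None) -> str:
    now = now or datetime.now()
    in_office_hours = now.weekday() < 5 and 9 <= now.hour < 18
    if not in_office_hours:
        return "voicemail"                      # time-of-day rule
    return BUDDIES.get(caller, "desk-phone")    # buddy-list / multi-device rule

print(route_call("sip:friend@example.org", datetime(2007, 5, 10, 11, 0)))
```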

PSTN mapping: SIP can manage the translation or mapping of conventional PSTN numbers to SIP URIs and vice versa. This capability allows SIP sessions to inter-work transparently with the PSTN. There are organisations, such as those behind ENUM, that provide the appropriate database capabilities. To quote ENUM’s home page:

“ENUM unifies traditional telephony and next-generation IP networks, and provides a critical framework for mapping and processing diverse network addresses. It transforms the telephone number—the most basic and commonly-used communications address—into a universal identifier that can be used across many different devices and applications (voice, fax, mobile, email, text messaging, location-based services and the Internet).”
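The heart of ENUM is a simple mapping: the digits of an E.164 telephone number are reversed, separated by dots and placed under the e164.arpa domain, where DNS NAPTR records can point at SIP URIs. A minimal sketch of just the name conversion, using a fictitious UK-style number, is shown below.

```python
# Sketch of the ENUM mapping idea: an E.164 number becomes a domain under
# e164.arpa whose NAPTR records can point at SIP URIs. The number below is
# a fictitious UK-style example.
def enum_domain(e164_number: str) -> str:
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+44 20 7946 0123"))
# -> 3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa
# A resolver would then query that name for NAPTR records, which might
# yield a URI such as "sip:office@example.com".
```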

SIP trunking: SIP trunks enable enterprises to aggregate inter-site calls onto a pure IP network. This could use an IP-VPN over an MPLS-based network with a guaranteed Quality of Service. Using SIP trunks could lead to significant cost savings when compared to using traditional E1 or T1 leased lines.

Inter-island communications: In a recent post, Islands of communication or isolation? I wrote about the challenges of communication between islands of standards or users. The adoption of SIP-based services could enable a degree of integration with other companies to extend the reach of what, to date, have been internal services.

Of course, the partner companies need to have adopted SIP as well and have appropriate security measures in place. This is where the challenge would lie in achieving this level of open communications! (Picture credit: Zultys: a Wi-Fi SIP phone)

SIP servers

SIP servers are the centralised capability that manages the establishment of communications sessions by users. Although there are many types of server, they are essentially just software processes and could all run on a single processor or device. There are several types of SIP server:

Registrar Server: The registrar server authenticates and registers users as soon as they come on-line. It stores identities and the list of devices in use by each user.

Location Server: The location server keeps track of users’ locations as they roam and provides this data to other SIP servers as required.

Redirect Server: When users are roaming, the Redirect Server maps session requests to a server closer to the user or an alternate device.

Proxy Server: SIP proxy servers pass on SIP requests to other servers located either downstream or upstream.

Presence Server: SIP presence servers enable users (known as presentities) to make their status available to other users who would like to see it (known as watchers).
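To tie the registrar and location roles described above together, here is a toy in-memory sketch; it ignores authentication, expiry timers and everything else a real registrar must handle, and the addresses are invented.

```python
# Toy sketch of the registrar / location-server roles described above.
# An in-memory dictionary stands in for a real, authenticated user database.
from typing import Dict, List

bindings: Dict[str, List[str]] = {}   # AOR -> registered contact URIs

def register(aor: str, contact: str) -> None:
    """Handle a (pre-authenticated) REGISTER: remember where the user can be reached."""
    bindings.setdefault(aor, []).append(contact)

def locate(aor: str) -> List[str]:
    """Location-server style look-up used when routing an incoming INVITE."""
    return bindings.get(aor, [])

register("sip:alice@example.org", "sip:alice@192.0.2.10:5060")
print(locate("sip:alice@example.org"))
```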

Call setup flow

The diagram below shows the initiation of a call from the PSTN network (section A), connection (section B) and disconnection (section C). The flow is quite easy to understand. One of the downsides is that a complex session set-up can easily involve 40 to 50+ separate transactions, which could lead to unacceptable set-up times, especially if the SIP session is being negotiated across the best-effort Internet.

(Picture source: NMS Communications)
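For a plain SIP-to-SIP call the core exchange is much shorter. The hedged sketch below lists the basic message sequence so that the A / B / C sections of the diagram can be related to actual SIP methods; real flows, especially those crossing PSTN gateways, involve far more transactions.

```python
# A hedged sketch of the basic SIP call flow between two SIP endpoints.
basic_call_flow = [
    ("caller -> callee", "INVITE"),       # section A: call initiation
    ("callee -> caller", "100 Trying"),
    ("callee -> caller", "180 Ringing"),
    ("callee -> caller", "200 OK"),       # callee answers
    ("caller -> callee", "ACK"),          # section B: session established
    ("both",             "media flows over RTP, outside SIP"),
    ("caller -> callee", "BYE"),          # section C: disconnect
    ("callee -> caller", "200 OK"),
]

for direction, message in basic_call_flow:
    print(f"{direction:17s} {message}")
```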

Round-up

As a standard, SIP has had a profound impact on our daily lives and sits comfortably alongside those other protocol acronyms that have fallen into the daily vernacular, such as IP, HTTP, www and TCP. Protocols that operate at the application level seem to be so much more relevant to our daily lives than those that are buried in the network, such as MPLS and ATM.

There is still much to achieve by building capability on top of SIP, such as federated services and, more importantly, interoperability. Bodies working on interoperability include SIPcenter, SIP Forum, SIPfoundry, SIPit and the IETF’s SPEERMINT working group. More fundamental areas under evaluation are authentication and billing.

More in-depth information about SIP can be found at http://www.tech-invite.com, a portal devoted to SIP and surrounding technologies.

Next time you buy a SIP Wi-Fi phone from your local shop, install it, and find that it works first time AND saves you money, just think about all the work that has gone into creating this software wonder. Sometimes standards and open software hit a home run. SIP is just that.

Addendum #1: Do you know your ENUM?


IP Multimedia Subsystem or bust!

May 10, 2007

I have never felt so uncomfortable writing about a subject as I do now while contemplating IP Multimedia Subsystem (IMS). Why this should be, I’m not quite sure.

Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:

“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”

This was an initiative to bring computers and software to the world of voice switches, enabling carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:

“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”

Now there was absolutely nothing wrong with that vision and the core technology was relatively straightforward (database look-up number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death; it just fizzled out. It didn’t really disappear as such, as most of the IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.

Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus intelligent networks goes right back to the early 1990s and is still raging today – well, at least simmering.

Put bluntly, carriers laudably want intelligence to be based in the network so that they are able to provide, manage and control applications and derive revenue that will compensate for plummeting Plain Old Telephone Service (POTS) revenues. Most IT and Internet people do not share this vision, as they believe it holds back service innovation, which generally comes from small companies. There is a certain amount of truth in this view, as there are clear examples of where this is happening today if we look at the fixed and mobile industries.

Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?

What is IP Multimedia Subsystem (IMS)?

IMS is an architectural framework or reference architecture - not a standard - that provides a common method for IP multiple media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world - and the communications world come to that - a good part of this activity could be encompassed by the term middleware. Middleware is an interface (abstraction) layer that sits between the networks and the applications / services, providing a common Application Programming Interface (API).

The commercial justification of IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.

The technical vision of IMS is about delivering seamless services where customers are able to access any type of service, from any device they want to use, with single sign-on, with common contacts and with fluidity between wireline and wireless services. IMS has ambitions of delivering:

  • Common user interfaces for any service
  • Open application server architecture to enable a ‘rich’ service set
  • Separate user data from services for cross-service access
  • Standardised session control
  • Inherent service mobility
  • Network independence
  • Inter-working with legacy IN applications

One of the comments I came across on the Internet from a major telecomms equipment vendor was that IMS was about the “Need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far as innovative services such as those mentioned generally do not come out of the carrier world.

Traditionally, each application or service offered by carriers sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols, and running in complete isolation from other services. In many ways this reflects the same situation that provided the motivation to develop a common control plane for data services called GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.


Removal of service silos
Source: Business Communications Review, May 2006

As with GMPLS, most large equipment vendors are committed to IMS and supply IMS-compliant products. As stated in the above article:

“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make “anything over IP and IP over everything” a reality.”

Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 1990s, albeit a trifle more complex. This is the view of Bob Bellman in an article entitled From Softswitching To IMS: Are We There Yet? Many of the core elements of a softswitch architecture are to be found in the IMS architecture, including the separation of the control and data planes.

VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006

Another associated reference architecture that is aligned with IMS, and is being pushed hard by software and equipment vendors in the enterprise world, is Service Oriented Architecture (SOA), an architecture that focuses on services as the core design principle.

IMS has been developed by an industry consortium and originated in the mobile world in an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP, with 3GPP2 and TISPAN extending it towards CDMA and fixed networks. Nowadays, just about every standards body seems to be involved, including the Open Mobile Alliance, ANSI, ITU, IETF, the Parlay Group and the Liberty Alliance – fourteen in total.

Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (three, four and five letter acronyms), which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:

#1 Applications layer:

The application layer contains Application Servers (AS) which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). Like GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS), or an HSS in a 3rd party network if the user is roaming (in GSM this role is played by the Home Location Register (HLR)).

(Source: Lucent Technologies)
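As a purely conceptual illustration of the AS-to-HSS check described above, the sketch below uses an invented user profile held in a Python dictionary; a real HSS is queried over Diameter-based interfaces, not anything like this.

```python
# Hedged sketch: an application server checking a user against the HSS
# before delivering a service. The profile fields are invented for
# illustration; real HSS interfaces use Diameter, not Python dicts.
HSS = {
    "sip:alice@example.org": {"services": {"presence", "voip"}, "roaming": False},
}

def authorise(aor: str, service: str) -> bool:
    profile = HSS.get(aor)
    return bool(profile) and service in profile["services"]

print(authorise("sip:alice@example.org", "voip"))   # True
print(authorise("sip:alice@example.org", "iptv"))   # False
```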

The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.

Breakout Gateways provide routing information based on telephone number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.

PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.

#2 Control layer:

The control plane hosts the HSS, which is the master database of user identities and of the individual calls or service sessions currently being used by each user. There are several roles that a SIP call / session controller (CSCF, or Call Session Control Function) can undertake (a conceptual sketch follows the list):

  • P-CSCF (Proxy-CSCF): This provides similar functionality to a proxy server in an intranet
  • S-CSCF (Serving-CSCF): This is the core SIP server, always located in the home network
  • I-CSCF (Interrogating-CSCF): This is a SIP server located at a network’s edge and its address can be found in DNS servers by 3rd party SIP servers.
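The conceptual sketch below strings the three roles together to show the division of labour. The function names and the routing detail are invented and greatly simplified compared with the real 3GPP call flows; it is only meant to show which role does what.

```python
# Conceptual sketch only: how an incoming SIP request might pass through the
# IMS control-layer roles. Function names and routing detail are invented to
# show the division of labour, not taken from the 3GPP specifications.
def p_cscf(request):
    # First contact point for the user's device, like an intranet proxy
    return i_cscf(request)

def i_cscf(request):
    # Edge server found via DNS by third-party networks; consults the HSS
    # to find the user's serving node
    return s_cscf(request)

def s_cscf(request):
    # Core SIP server in the home network; applies service logic by
    # invoking the relevant application server
    return application_server(request)

def application_server(request):
    return f"service applied to {request}"

print(p_cscf("INVITE sip:bob@example.com"))
```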

#3 Transport layer:

IMS encompasses any service that uses IP / MPLS as transport and pretty much all of the fixed and mobile access technologies, including ADSL, cable modem DOCSIS, Ethernet, Wi-Fi, WiMAX and CDMA wireless. It has little choice in this matter: if IMS is to be used, it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators, IMS is also focusing on the use of IPv6 with IPv4 ‘only’ being supported in the near term.

Roundup

IMS represents a tremendous amount of work spread over six years and uses as many existing standards as possible, such as SIP and Parlay. IMS is a work in progress and much still needs to be done – security and seamless inter-working of services are but two examples.

All the major telecommunications software, middleware and integration companies are involved, and just thinking about the scale of the task needed to put in place common control for a whole raft of services makes me wonder just how practical the implementation of IMS actually is. Don’t get me wrong, I am a real supporter of these initiatives because it is hard to come up with an alternative vision that makes sense, but boy, I’m glad that I’m not in charge of a carrier IMS project!

The upsides of using IMS in the long term are pretty clear and focus around lowering costs, quicker time to market, integration of services and, hopefully, single log-in.

It’s some of the downsides that particularly concern me:

  • Non-migration of existing services: As we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure, such as instant conferencing, messaging, gaming, personal information management, presence, location-based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services in the short term onto a brand new infrastructure – especially when that infrastructure is based on an incomplete reference architecture?

    IMS is a long-term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future and can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever do.

  • Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This is a quote from Keith Nissen of InStat in a BCR article.
  • No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture is incomplete, services will continue to be rolled out without IMS, further inflaming the non-migration issue raised above.
  • Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused on getting IMS integration sorted, they should really be working on externally focused services. Without those services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS Intelligent Networks reborn, destined to suffer the same fate?
  • OSS integration: Any IMS system will need to integrate with carriers’ often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
  • Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That role lies with small companies on the Internet creating Web 2.0 services that utilise technologies such as presence, VoIP and AJAX today. Will any of these companies care whether a carrier has an IMS infrastructure in place?
  • Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
  • Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.

I said at the beginning of the post that I felt uncomfortable writing about IMS, and now that I’m finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day and whether it diverts effort, money and limited resources away from where they should be applied – on creating interesting services and gaining market share. Only time will tell.

Addendum:  In a previous post, I wrote about the IETF’s Path Computation Element Working Group and it was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF) which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here starting at slide 10. Does RACF compete with PCE or could PCE be a part of RACF?

